[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-junkunyuan--Awesome-Domain-Generalization":3,"tool-junkunyuan--Awesome-Domain-Generalization":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 
AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":81,"owner_url":82,"languages":81,"stars":83,"forks":84,"last_commit_at":85,"license":81,"difficulty_score":86,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":93,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":101,"updated_at":102,"faqs":103,"releases":104},3273,"junkunyuan\u002FAwesome-Domain-Generalization","Awesome-Domain-Generalization","Awesome things about domain generalization, including papers, code, etc.","Awesome-Domain-Generalization 是一个专注于“域泛化”领域的开源资源合集，旨在帮助开发者和研究人员应对机器学习模型在未知数据分布下性能下降的挑战。当训练数据与实际应用场景存在差异（如光照变化、不同设备采集等）时，模型往往难以适应，而域泛化技术正是解决这一“水土不服”问题的关键。\n\n该仓库系统性地整理了大量高质量学术资源，涵盖从基础理论分析、综述文章到各类前沿算法代码。其内容分类细致，不仅包括基于域对齐、数据增强、元学习、因果推断等主流技术路线的研究论文，还涉及单域泛化、联邦域泛化等进阶主题，并提供了相关数据集、开源库以及讲座教程链接。无论是想要快速了解领域全貌的初学者，还是致力于探索最新算法的资深研究员，都能在此找到极具价值的参考材料。通过汇聚分散的社区成果，Awesome-Domain-Generalization 极大地降低了获取专业知识的门槛，是推动域泛化技术研究与应用的重要基础设施。","# Awesome Domain 
Generalization\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\nThis repository is a collection of awesome things about **domain generalization**, including papers, code, etc.\n\nIf you would like to contribute to our repository or have any questions\u002Fadvice, see [Contributing & Contact](#contributing--contact).\n\n# Contents\n- [Awesome Domain Generalization](#awesome-domain-generalization)\n- [Contents](#contents)\n- [Papers](#papers)\n  - [Survey](#survey)\n  - [Theory \\& Analysis](#theory--analysis)\n  - [Dataset](#dataset)\n  - [Domain Generalization](#domain-generalization)\n    - [Domain Alignment-Based Methods](#domain-alignment-based-methods)\n    - [Data Augmentation-Based Methods](#data-augmentation-based-methods)\n    - [Meta-Learning-Based Methods](#meta-learning-based-methods)\n    - [Ensemble Learning-Based Methods](#ensemble-learning-based-methods)\n    - [Self-Supervised Learning-Based Methods](#self-supervised-learning-based-methods)\n    - [Disentangled Representation Learning-Based Methods](#disentangled-representation-learning-based-methods)\n    - [Regularization-Based Methods](#regularization-based-methods)\n    - [Normalization-Based Methods](#normalization-based-methods)\n    - [Information-Based Methods](#information-based-methods)\n    - [Causality-Based Methods](#causality-based-methods)\n    - [Inference-Time-Based Methods](#inference-time-based-methods)\n    - [Neural Architecture Search-based Methods](#neural-architecture-search-based-methods)\n  - [Single Domain Generalization](#single-domain-generalization)\n  - [Semi\u002FWeak\u002FUn-Supervised Domain Generalization](#semiweakun-supervised-domain-generalization)\n  - [Open\u002FHeterogeneous Domain Generalization](#openheterogeneous-domain-generalization)\n  - [Federated Domain 
Generalization](#federated-domain-generalization)\n  - [Source-free Domain Generalization](#source-free-domain-generalization)\n  - [Applications](#applications)\n    - [Person Re-Identification](#person-re-identification)\n    - [Face Recognition \\& Anti-Spoofing](#face-recognition--anti-spoofing)\n  - [Related Topics](#related-topics)\n    - [Life-Long Learning](#life-long-learning)\n- [Publications](#publications)\n- [Datasets](#datasets)\n- [Libraries](#libraries)\n- [Lectures \\& Tutorials \\& Talks](#lectures--tutorials--talks)\n- [Other Resources](#other-resources)\n- [Contributing \\& Contact](#contributing--contact)\n- [Acknowledgements](#acknowledgements)\n\n# Papers\n> We list papers, implementation code (the unofficial code is marked with *), etc, in the order of year and from journals to conferences. Note that some papers may fall into multiple categories.\n\n## Survey\n- Generalizing to Unseen Domains: A Survey on Domain Generalization [[IJCAI 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2103.03097)] [[Slides](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002FDGSurvey-ppt.pdf)] [155]\n- Domain Generalization in Vision: A Survey [[TPAMI 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02503)] [3]\n\n## Theory & Analysis\n> We list the papers that either provide inspiring theoretical analyses or conduct extensive empirical studies for domain generalization.\n\n- A Generalization Error Bound for Multi-Class Domain Generalization [[arXiv 2019](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.10392)] [123]\n- Domain Generalization by Marginal Transfer Learning [[JMLR 2021](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume22\u002F17-679\u002F17-679.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Faniketde\u002FDomainGeneralizationMarginal)] (**MTL**) [188]\n- The Risks of Invariant Risk Minimization [[ICLR 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.05761)] [196]\n- In Search of Lost Domain Generalization [[ICLR 
2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.01434.pdf?fbclid=IwAR1YkUXkIhC6fhr6eI687zBXo_W2tTjjTAFnyjEWvmq4gQKon_4pIDbTnQ4)] [134]\n- The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FHendrycks_The_Many_Faces_of_Robustness_A_Critical_Analysis_of_Out-of-Distribution_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fhendrycks\u002Fimagenet-r)] [135]\n- An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fecf9902e0f61677c8de25ae60b654669-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdomainbed_measures)] [198]\n- Towards a Theoretical Framework of Out-Of-Distribution Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc5c1cb0bebd56ae38817b251ad72bedb-Paper.pdf)] [199]\n- Out-of-Distribution Generalization in Kernel Regression [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F691dcb1d65f31967a874d18383b9da75-Paper.pdf)] [205]\n- Quantifying and Improving Transferability in Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F5adaacd4531b78ff8b5cedfe3f4d5212-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FGordon-Guojun-Zhang\u002FTransferability-NeurIPS2021)] (**Transfer**) [206]\n- OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FYe_OoD-Bench_Quantifying_and_Understanding_Two_Dimensions_of_Out-of-Distribution_Generalization_CVPR_2022_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fynysjtu\u002Food_bench)] (**OoD-Bench**) [214]\n- Crafting Distribution 
Shifts for Validation and Training in Single Source Domain Generalization [[WACV 2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19774)] [[Code](https:\u002F\u002Fgithub.com\u002FNikosEfth\u002Fcrafting-shifts)] [232]\n  \n## Dataset\n- Free Viewpoint Action Recognition Using Motion History Volumes [[CVIU 2006](https:\u002F\u002Fhal.inria.fr\u002Fdocs\u002F00\u002F54\u002F46\u002F29\u002FPDF\u002Fcviu_motion_history_volumes.pdf)] (**IXMAS dataset**) [39]\n- Geodesic flow kernel for unsupervised domain adaptation [[CVPR 2012](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2013\u002Fpapers\u002FFang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf)] (**Office-Caltech dataset**) [32]\n- Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias [[ICCV 2013](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2013\u002Fpapers\u002FFang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf)] (**VLCS dataset**) [16]\n- Domain Generalization for Object Recognition with Multi-Task Autoencoders [[ICCV 2015](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FGhifary_Domain_Generalization_for_ICCV_2015_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FEmma0118\u002Fmate)] (**MTAE**, **Rotated MNIST dataset**) [6]\n- Scalable Person Re-identification: A Benchmark [[ICCV 2015](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FZheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf)] (**Market-1501 dataset**) [46]\n- The Cityscapes Dataset for Semantic Urban Scene Understanding [[CVPR 2016](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FCordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf)] (**Cityscapes dataset**) [44]\n- The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes [[CVPR 
2016](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FRos_The_SYNTHIA_Dataset_CVPR_2016_paper.pdf)] (**SYNTHIA dataset**) [42]\n- Playing for Data: Ground Truth from Computer Games [[ECCV 2016](https:\u002F\u002Flinkspringer.53yu.com\u002Fchapter\u002F10.1007\u002F978-3-319-46475-6_7)] (**GTA5 dataset**) [43]\n- Performance Measures and a Data Set for Multi-target, Multi-camera Tracking [[ECCV 2016](https:\u002F\u002Flinkspringer.53yu.com\u002Fchapter\u002F10.1007\u002F978-3-319-48881-3_2)] (**Duke dataset**) [47]\n- VisDA: The Visual Domain Adaptation Challenge [[arXiv 2017](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.06924.pdf)] (**VisDA-17 dataset**) [36]\n- Deep Hashing Network for Unsupervised Domain Adaptation [[CVPR 2017](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FVenkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf)] (**OfficeHome dataset**) [20]\n- Deeper, Broader and Artier Domain Generalization [[ICCV 2017](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLi_Deeper_Broader_and_ICCV_2017_paper.pdf)] [[Code](https:\u002F\u002Fdali-dl.github.io\u002Fproject_iccv2017.html)] (**PACS dataset**) [2]\n- Learning Multiple Visual Domains with Residual Adapters [[NeurIPS 2017](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2017\u002Ffile\u002Fe7b24b112a44fdd9ee93bdf998c6ca0e-Paper.pdf)] (**Visual Decathlon (VD) dataset**) [38]\n- Recognition in Terra Incognita [[ECCV 2018](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FBeery_Recognition_in_Terra_ECCV_2018_paper.pdf)] (**Terra Incognita dataset**) [45]\n- Invariant Risk Minimization [[arXiv 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.02893.pdf;)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]\n- Learning Robust Representations by Projecting Superficial 
Statistics Out [[ICLR 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1903.06256)] [[Code](https:\u002F\u002Fgithub.com\u002FHaohanWang\u002FHEX)] (**HEX**, **ImageNet-Sketch dataset**) [35]\n- Benchmarking Neural Network Robustness to Common Corruptions and Perturbations [[ICLR 2019](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.12261.pdf?ref=https:\u002F\u002Fgithubhelp.com)] (**CIFAR-10-C \u002F CIFAR-100-C \u002F ImageNet-C dataset**) [37]\n- Moment Matching for Multi-Source Domain Adaptation [[ICCV 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FPeng_Moment_Matching_for_Multi-Source_Domain_Adaptation_ICCV_2019_paper.pdf)] [[Code](http:\u002F\u002Fai.bu.edu\u002FM3SDA\u002F)] (**DomainNet dataset**) [33]\n- Learning to Generate Novel Domains for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304)] [[Code](https:\u002F\u002Fgithub.com\u002Fmousecpn\u002FL2A-OT)] (**L2A-OT**, **Digits-DG dataset**) [28]\n- Domain Adaptive Ensemble Learning [[TIP 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2003.07325)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)] (**mini-DomainNet dataset**) [34]\n- Towards Non-IID Image Classification A Dataset and Baselines [[PR 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1906.02899)] (**NICO dataset**) [108]\n- NICO++ Towards Better Benchmarking for Domain Generalization [[arXiv 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.08040)] (**NICO++ dataset**) [183]\n- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts [[ICLR 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.06523)] [[Code](https:\u002F\u002Fgithub.com\u002FWeixin-Liang\u002FMetaShift)] (**MetaShift dataset**) [213]\n\n## Domain Generalization\n> To address the dataset\u002Fdomain shift problem 
[[109](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320311002901?casa_token=qIu5tyPmlgQAAAAA:IDLcYED3jzUGsissKY_EuDLQTMCkGQrEWoAq542Cbcd4FKQinvp78Wgb6jhRiSLqGdQCvcifwprz)] [[110](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Frecht19a\u002Frecht19a.pdf)] [[111](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs10994-009-5152-4.pdf)] [[112](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fd8330f857a17c53d217014ee776bfd50-Paper.pdf)], domain generalization [[113](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2011\u002Ffile\u002Fb571ecea16a9824023ee1af16897a582-Paper.pdf)] aims to learn a model from source domain(s) and make it generalize well to unknown target domains.\n\n### Domain Alignment-Based Methods\n> Domain alignment-based methods aim to minimize divergence between source domains for learning domain-invariant representations.\n\n- Domain Generalization via Invariant Feature Representation [[ICML 2013](http:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fmuandet13.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fkrikamol\u002Fdg-dica)] (**DICA**) [65]\n- Domain-Adversarial Training of Neural Networks [[JMLR 2016](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume17\u002F15-239\u002F15-239.pdf)] [[Code](https:\u002F\u002Fgraal.ift.ulaval.ca\u002Fdann\u002F)] (**DANN**) [226]\n- Learning Attributes Equals Multi-Source Domain Generalization [[CVPR 2016](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FGan_Learning_Attributes_Equals_CVPR_2016_paper.pdf)] (**UDICA**) [120]\n- Robust Domain Generalisation by Enforcing Distribution Invariance [[IJCAI 2016](https:\u002F\u002Feprints.qut.edu.au\u002F115382\u002F15\u002FErfani2016IJCAI.pdf)] (**ESRand**) [66]\n- Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization [[TPAMI 
2017](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1510.04373)] (**SCA**) [67]\n- Unified Deep Supervised Domain Adaptation and Generalization [[ICCV 2017](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FMotiian_Unified_Deep_Supervised_ICCV_2017_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fsamotiian\u002FCCSA)] (**CCSA**) [71]\n- Beyond Domain Adaptation: Unseen Domain Encapsulation via Universal Non-volume Preserving Models [[arXiv 2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1812.03407)] (**UNVP**) [166]\n- Domain Generalization via Conditional Invariant Representation [[AAAI 2018](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F11682\u002F11541)] (**CIDG**) [68]\n- Domain Generalization with Adversarial Feature Learning [[CVPR 2018](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Domain_Generalization_With_CVPR_2018_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FYuqiCui\u002FMMD_AAE)] (**MMD-AAE**) [76]\n- Deep Domain Generalization via Conditional Invariant Adversarial Networks [[ECCV 2018](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FYa_Li_Deep_Domain_Generalization_ECCV_2018_paper.pdf)] (**CIDDG, CDANN**) [77]\n- Generalizing to Unseen Domains via Distribution Matching [[arXiv 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1911.00804)] [[Code](https:\u002F\u002Fgithub.com\u002Fbelaalb\u002FG2DM)] (**G2DM**) [81]\n- Image Alignment in Unseen Domains via Domain Deep Generalization [[arXiv 2019](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.12028)] (**DeGIA**) [169]\n- Multi-Adversarial Discriminative Deep Domain Generalization for Face Presentation Attack Detection [[CVPR 2019](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)] 
[[Code](https:\u002F\u002Fgithub.com\u002Frshaojimmy\u002FCVPR2019-MADDoG)] (**MADDG**) [78]\n- Generalizable Feature Learning in the Presence of Data Bias and Domain Class Imbalance with Application to Skin Lesion Classification [[MICCAI 2019](https:\u002F\u002Fwww.cs.sfu.ca\u002F~hamarneh\u002Fecopy\u002Fmiccai2019d.pdf)] [72]\n- Domain Generalization via Model-Agnostic Learning of Semantic Features [[NeurIPS 2019](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F2974788b53f73e7950e8aa49f3a306db-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fbiomedia-mira\u002Fmasf)] (**MASF**) [18]\n- Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization [[ECMLPKDD 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1904.12543)] [[Code](https:\u002F\u002Fgithub.com\u002Fakuzeee\u002FAFLAC)] (**AFLAC**) [84]\n- Feature Alignment and Restoration for Domain Generalization and Adaptation [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.12009)] (**FAR**) [189]\n- Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations [[arXiv 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2006.11478)] (**RVR**) [82]\n- Correlation-aware Adversarial Domain Adaptation and Generalization [[PR 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1911.12983)] [[Code](https:\u002F\u002Fgithub.com\u002Fmahfujur1\u002FCA-DA-DG)] (**CAADA**) [80]\n- Domain Generalization Using a Mixture of Multiple Latent Domains [[AAAI 2020](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F6846\u002F6700)] [[Code](https:\u002F\u002Fgithub.com\u002Fmil-tokyo\u002Fdg_mmld)] [83]\n- Single-Side Domain Generalization for Face Anti-Spoofing [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FJia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)] 
[[Code](https:\u002F\u002Fgithub.com\u002Ftaylover-pei\u002FSSDG-CVPR2020)] (**SSDG**) [79]\n- Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI [[ISBI 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1910.10035)] [85]\n- Respecting Domain Relations: Hypothesis Invariance for Domain Generalization [[ICPR 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2010.07591)] (**HIR**) [74]\n- Domain Generalization via Multidomain Discriminant Analysis [[UAI 2020](http:\u002F\u002Fproceedings.mlr.press\u002Fv115\u002Fhu20a\u002Fhu20a.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Famber0309\u002FMultidomain-Discriminant-Analysis)] (**MDA**) [70]\n- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [[NeurIPS 2020](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F201d7288b4c18a679e48b31c72c30ded-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FLDDG)] (**LDDG**) [75]\n- Domain Generalization via Entropy Regularization [[NeurIPS 2020](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fb98249b38337c5088bbc660d8f872d6a-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fsshan-zhao\u002FDG_via_ER)] [86]\n- Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09913)] [192]\n- Semi-Supervised Domain Generalization in RealWorld: New Benchmark and Strong Baseline [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.10221)] [179]\n- Collaborative Semantic Aggregation and Calibration for Separated Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.06736)] [[Code](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCSAC)] (**CSAC**) [161]\n- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [[TIP 
2021](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F83\u002F9263394\u002F09311771.pdf)] (**MMFA-AAE**) [144]\n- Scale Invariant Domain Generalization Image Recapture Detection [[ICONIP 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.03496)] (**SADG**) [177]\n- Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference [[IJCAI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.10931)] (**VBCLS**) [195]\n- Domain Generalization using Causal Matching [[ICML 2021](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fmahajan21b\u002Fmahajan21b.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Frobustdg)] (**MatchDG**) [73]\n- Generalization on Unseen Domains via Inference-Time Label-Preserving Target Projections [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FPandey_Generalization_on_Unseen_Domains_via_Inference-Time_Label-Preserving_Target_Projections_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fyys-Polaris\u002FInferenceTimeDG)] [118]\n- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Flileicv\u002FPDEN)] (**PDEN**) [141]\n- Confidence Calibration for Domain Generalization Under Covariate Shift [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FGong_Confidence_Calibration_for_Domain_Generalization_Under_Covariate_Shift_ICCV_2021_paper.pdf)] [133]\n- On Calibration and Out-of-domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F118bd558033a1016fcc82560c65cca5f-Paper.pdf)] [154]\n- Domain-invariant Feature Exploration for Domain Generalization [[TMLR 
2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.12020)] [[Code](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)] (**DIFEX**) [209]\n- Cross-Domain Ensemble Distillation for Domain Generalization [[ECCV 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.14058)] (**XDED**) [94]\n- Domain Generalisation via Risk Distribution Matching [[WACV 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18598)] [[Code](https:\u002F\u002Fgithub.com\u002Fnktoan\u002Frisk-distribution-matching)] (**RDM**) [234]\n\n### Data Augmentation-Based Methods\n> Data augmentation-based methods augment original data and train the model on the generated data to improve model robustness.\n\n- Certifying Some Distributional Robustness with Principled Adversarial Training [[arXiv 2017](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1710.10571.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fduchi-lab\u002Fcertifiable-distributional-robustness)] [52]\n- Generalizing across Domains via Cross-Gradient Training [[ICLR 2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1804.10745)] [[Code](https:\u002F\u002Fgithub.com\u002Fvihari\u002Fcrossgrad)] (**CrossGrad**) [53]\n- Generalizing to Unseen Domains via Adversarial Data Augmentation [[NeurIPS 2018](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Ffile\u002F1d94108e907bb8311d8802b48fd54b4a-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fgeneralize-unseen-domains)] [25]\n- Staining Invariant Features for Improving Generalization of Deep Convolutional Neural Networks in Computational Pathology [[Frontiers in Bioengineering and Biotechnology 2019](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2019.00198\u002Ffull?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Bioengineering_and_Biotechnology&id=474781)] [26]\n- 
Multi-component Image Translation for Deep Domain Generalization [[WACV 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1812.08974)] [[Code](https:\u002F\u002Fgithub.com\u002Fmahfujur1\u002Fmit-DG)] [167]\n- Domain Generalization by Solving Jigsaw Puzzles [[CVPR 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FCarlucci_Domain_Generalization_by_Solving_Jigsaw_Puzzles_CVPR_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Ffmcarlucci\u002FJigenDG)] (**JiGen**) [98]\n- Addressing Model Vulnerability to Distributional Shifts Over Image Transformation Sets [[ICCV 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FVolpi_Addressing_Model_Vulnerability_to_Distributional_Shifts_Over_Image_Transformation_Sets_ICCV_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fdomain-shift-robustness)] [21]\n- Domain Randomization and Pyramid Consistency: Simulation-to-Real Generalization Without Accessing Target Domain Data [[ICCV 2019](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FYue_Domain_Randomization_and_Pyramid_Consistency_Simulation-to-Real_Generalization_Without_Accessing_Target_ICCV_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fxyyue\u002FDRPC)] [62]\n- Hallucinating Agnostic Images to Generalize Across Domains [[ICCV workshop 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1808.01102)] [[Code](https:\u002F\u002Fgithub.com\u002Ffmcarlucci\u002FADAGE)] [63]\n- Improve Unsupervised Domain Adaptation with Mixup Training [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.00677)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**Mixup**) [227]\n- Improving the Generalizability of Convolutional Neural Network-Based Segmentation on CMR Images [[Frontiers in Cardiovascular Medicine 2020](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffcvm.2020.00105\u002Ffull)] 
[24]\n- Generalizing Deep Learning for Medical Image Segmentation to Unseen Domains via Deep Stacked Transformation [[TMI 2020](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002Fpmc7393676\u002F)] (**BigAug**) [23]\n- Deep Domain-Adversarial Image Generation for Domain Generalisation [[AAAI 2020](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F7003\u002F6857)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)] (**DDAIG**) [55]\n- Towards Universal Representation Learning for Deep Face Recognition [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FShi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FMatyushinMA\u002Funi_rep_deep_faces)] [22]\n- Heterogeneous Domain Generalization via Domain Mixup [[ICASSP 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.05448)] [[Code](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FMIXALL)] [128]\n- Learning to Generate Novel Domains for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304)] [[Code](https:\u002F\u002Fgithub.com\u002Fmousecpn\u002FL2A-OT)] (**L2A-OT**, **Digits-DG dataset**) [28]\n- Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.09316)] [[Code](https:\u002F\u002Fgithub.com\u002Femma-sjwang\u002FEISNet)] (**EISNet**) [99]\n- Towards Recognizing Unseen Categories in Unseen Domains [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.12256.pdf?ref=https:\u002F\u002Fgithubhelp.com)] [[Code](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002FCuMix)] (**CuMix**) [57]\n- Rethinking Domain Generalization Baselines [[ICPR 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2101.09060)]\n- More is Better: A Novel Multi-view Framework for Domain Generalization [[arXiv 
2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.12329)] [184]\n- Semi-Supervised Domain Generalization with Stochastic StyleMatch [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2106.00592)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fssdg-benchmark)] (**StyleMatch**) [54]\n- Better Pseudo-label Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)] [156]\n- Out-of-domain Generalization from a Single Source: A Uncertainty Quantification Approach [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2108.02888)] [151]\n- Towards Principled Disentanglement for Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.13839)] [[Code](https:\u002F\u002Fgithub.com\u002Fhlzhang109\u002FDDG)] (**DDG**) [170]\n- MixStyle Neural Networks for Domain Generalization and Adaptation [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2107.02053)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [58]\n- VideoDG: Generalizing Temporal Relations in Videos to Novel Domains [[TPAMI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.03716)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FVideoDG)] (**APN**) [197]\n- Domain Generalization by Marginal Transfer Learning [[JMLR 2021](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume22\u002F17-679\u002F17-679.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Faniketde\u002FDomainGeneralizationMarginal)] [188]\n- Domain Generalisation with Domain Augmented Supervised Contrastive Learning [[AAAI Student Abstract 2021](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FSA-197.LeHS.pdf)] (**DASCL**) [139]\n- DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation [[AAAI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.09382)] 
[[Code](https:\u002F\u002Fgithub.com\u002FHaoyueBaiZJU\u002FDecAug)] (**DecAug**) [171]\n- Domain Generalization with Mixstyle [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2104.02008)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [56]\n- Robust and Generalizable Visual Representation Learning via Random Convolutions [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.13003)] [[Code](https:\u002F\u002Fgithub.com\u002Fwildphoton\u002FRandConv)] (**RC**) [59]\n- Learning to Learn Single Domain Generalization [[CVPR 2020](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FQiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FM-ADA)] (**M-ADA**) [27]\n- FSDR: Frequency Space Domain Randomization for Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FHuang_FSDR_Frequency_Space_Domain_Randomization_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjxhuang0508\u002FFSDR)] (**FSDR**) [115]\n- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FFedDG-ELCFS)] (**FedDG**) [147]\n- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FQiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FUMGUD)] [168]\n- Continual Adaptation of Visual Representations via Domain Randomization and 
Meta-learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FVolpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]\n- A Fourier-Based Framework for Domain Generalization [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXu_A_Fourier-Based_Framework_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FMediaBrain-SJTU\u002FFACT)] (**FACT**) [160]\n- Open Domain Generalization with Domain-Augmented Meta-Learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FShu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FOpenDG-DAML)] (**DAML**) [119]\n- A Simple Feature Augmentation for Domain Generalization [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FLi_A_Simple_Feature_Augmentation_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SFA**) [142]\n- Universal Cross-Domain Retrieval Generalizing Across Classes and Domains [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FPaul_Universal_Cross-Domain_Retrieval_Generalizing_Across_Classes_and_Domains_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmvp18\u002FUCDR)] (**SnMpNet**) [150]\n- Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization [[MM 2021](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475271)] [137]\n- Adversarial Teacher-Student Representation Learning for Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fa2137a2ae8e39b5002a3f8909ecb88fe-Paper.pdf)] [203]\n- Model-Based Domain Generalization [[NeurIPS 
2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fa8f12d9486cbcc2fe0cfc5352011ad35-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Farobey1\u002Fmbdg)] (**MBDG**) [200]\n- Optimal Representations for Covariate Shift [[ICLR 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.00057)] [[Code](https:\u002F\u002Fgithub.com\u002Fryoungj\u002Foptdom)] (**CAD**) [223]\n- Label-Efficient Domain Generalization via Collaborative Exploration and Generalization [[MM 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.03644)] [[Code](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCEG)] (**CEG**) [211]\n- Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization [[WACV 2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19774)] [[Code](https:\u002F\u002Fgithub.com\u002FNikosEfth\u002Fcrafting-shifts)] [232]\n\n### Meta-Learning-Based Methods\n> Meta-learning-based methods train the model on a meta-train set and improve its performance on a meta-test set for boosting out-of-domain generalization ability.\n\n- Learning to Generalize: Meta-Learning for Domain Generalization [[AAAI 2018](https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002FviewFile\u002F16067\u002F16547)] [[Code](https:\u002F\u002Fgithub.com\u002FHAHA-DL\u002FMLDG)] (**MLDG**) [1]\n- MetaReg: Towards Domain Generalization using Meta-Regularization [[NeurIPS 2018](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Ffile\u002F647bba344396e7c8170902bcf2e15551-Paper.pdf)] [[Code*](https:\u002F\u002Fgithub.com\u002Felliotbeck\u002FMetaReg_PyTorch)] (**MetaReg**) [4]\n- Feature-Critic Networks for Heterogeneous Domain Generalisation [[ICML 2019](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fli19l\u002Fli19l.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fliyiying\u002FFeature_Critic)] (**Feature-Critic**) [5]\n- Episodic Training for Domain Generalization [[ICCV 
2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLi_Episodic_Training_for_Domain_Generalization_ICCV_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FHAHA-DL\u002FEpisodic-DG)] (**Epi-FCR**) [7]\n- Domain Generalization via Model-Agnostic Learning of Semantic Features [[NeurIPS 2019](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F2974788b53f73e7950e8aa49f3a306db-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fbiomedia-mira\u002Fmasf)] (**MASF**) [18]\n- Domain Generalization via Semi-supervised Meta Learning [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.12658)] [[Code](https:\u002F\u002Fgithub.com\u002Fhosseinshn\u002FDGSML)] (**DGSML**) [127]\n- Frustratingly Simple Domain Generalization via Image Stylization [[arXiv 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2006.11207)] [[Code](https:\u002F\u002Fgithub.com\u002FGT-RIPL\u002FDomainGeneralization-Stylization)] [60]\n- Domain Generalization for Named Entity Boundary Detection via Metalearning [[TNNLS 2020](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9174763\u002F)] (**METABDRY**) [124]\n- Learning to Learn Single Domain Generalization [[CVPR 2020](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FQiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FM-ADA)] (**M-ADA**) [27]\n- Learning to Learn with Variational Information Bottleneck for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07645)] (**MetaVIB**) [15]\n- Sequential Learning for Domain Generalization [[ECCV workshop 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.01377)] (**S-MLDG**) [14]\n- Shape-Aware Meta-Learning for Generalizing Prostate MRI Segmentation to Unseen Domains [[MICCAI 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.02035)] 
[[Code](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FSAML)] (**SAML**) [17]\n- More is Better: A Novel Multi-view Framework for Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.12329)] [184]\n- Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains [[ICIP 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.13539)] (**x-EML**) [180]\n- Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [[ICIP 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.13538)] [185]\n- MetaNorm: Learning to Normalize Few-Shot Batches Across Domains [[ICLR 2021](https:\u002F\u002Fopenreview.net\u002Fpdf?id=9z_dNsC4B5t)] [[Code](https:\u002F\u002Fgithub.com\u002FYDU-AI\u002FMetaNorm)] (**MetaNorm**) [19]\n- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FHeliosZhao\u002FM3L)] (**M3L**) [12]\n- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FQiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FUMGUD)] [168]\n- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FVolpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]\n- Meta Batch-Instance Normalization for Generalizable Person Re-Identification [[CVPR 
2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChoi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fbismex\u002FMetaBIN)] (**MetaBIN**) [13]\n- Open Domain Generalization with Domain-Augmented Meta-Learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FShu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FOpenDG-DAML)] (**DAML**) [119]\n- On Challenges in Unsupervised Domain Generalization [[NeurIPS workshop 2021](https:\u002F\u002Fproceedings.mlr.press\u002Fv181\u002Fnarayanan22a\u002Fnarayanan22a.pdf)] [178]\n- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fb0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmanhhabui\u002FmDSDI)] (**mDSDI**) [202]\n\n### Ensemble Learning-Based Methods\n> Ensemble learning-based methods mainly train a domain-specific model on each source domain, and then draw on collective wisdom to make accurate prediction.\n\n- Exploiting Low-Rank Structure from Latent Domains for Domain Generalization [[ECCV 2014](https:\u002F\u002Flinkspringer.53yu.com\u002Fcontent\u002Fpdf\u002F10.1007\u002F978-3-319-10578-9_41.pdf)] [87]\n- Visual recognition by learning from web data: A weakly supervised domain generalization approach [[CVPR 2015](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fpapers\u002FNiu_Visual_Recognition_by_2015_CVPR_paper.pdf)] [89]\n- Multi-View Domain Generalization for Visual Recognition [[ICCV 2015](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FNiu_Multi-View_Domain_Generalization_ICCV_2015_paper.pdf)] (**MVDG**) [88]\n- Deep Domain Generalization With 
Structured Low-Rank Constraint [[TIP 2017](https:\u002F\u002Fpar.nsf.gov\u002Fservlets\u002Fpurl\u002F10065328)] [91]\n- Visual Recognition by Learning From Web Data via Weakly Supervised Domain Generalization [[TNNLS 2017](https:\u002F\u002Fbcmi.sjtu.edu.cn\u002Fhome\u002Fniuli\u002Fpaper\u002FVisual%20Recognition%20by%20Learning%20From%20Web%20Data%20via%20Weakly%20Supervised%20Domain%20Generalization.pdf)] [121]\n- Robust Place Categorization with Deep Domain Generalization [[IEEE Robotics and Automation Letters 2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1805.12048)] [[Code](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002Fcaffe)] (**COLD**) [97]\n- Multi-View Domain Generalization Framework for Visual Recognition [[TNNLS 2018](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FNiu_Multi-View_Domain_Generalization_ICCV_2015_paper.pdf)] [122]\n- Domain Generalization with Domain-Specific Aggregation Modules [[GCPR 2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1809.10966)] (**D-SAMs**) [92]\n- Best Sources Forward: Domain Generalization through Source-Specific Nets [[ICIP 2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1806.05810)] [90]\n- Batch Normalization Embeddings for Deep Domain Generalization [[arXiv 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2011.12672)] (**BNE**) [96]\n- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [[TMI 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2010.06208)] (**DoFE**) [93]\n- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [[TMI 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2002.03366)] [[Code](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FMS-Net)] (**MS-Net**) [95]\n- Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition [[ICLR 2020](https:\u002F\u002Fopenreview.net\u002Fpdf?id=H1lxVyStPH)] (**GCFN**) [126]\n- 
Learning to Optimize Domain Specific Normalization for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.04275)] (**DSON**) [94]\n- Class-conditioned Domain Generalization via Wasserstein Distributional Robust Optimization [[ICLR workshop 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.03676)] [175]\n- Domain and Content Adaptive Convolution for Domain Generalization in Medical Image Segmentation [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.05676)] (**DCAC**) [176]\n- Dynamically Decoding Source Domain Knowledge for Unseen Domain Generalization [[arXiv 2021](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FKarthik-Nandakumar-3\u002Fpublication\u002F355142270_Dynamically_Decoding_Source_Domain_Knowledge_For_Unseen_Domain_Generalization\u002Flinks\u002F61debe18034dda1b9ef16fc6\u002FDynamically-Decoding-Source-Domain-Knowledge-For-Unseen-Domain-Generalization.pdf)] (**D2SDK**) [174]\n- Domain Adaptive Ensemble Learning [[TIP 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2003.07325)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)] (**mini-DomainNet dataset**) [34]\n- Generalizable Person Re-identification with Relevance-aware Mixture of Experts [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FDai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.pdf)] (**RaMoE**) [187]\n- Learning Transferrable and Interpretable Representations for Domain Generalization [[MM 2021](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475488)] (**DTN**) [131]\n- Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation [[MM 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)] (**KDDG**) [157]\n- TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification [[NeurIPS 
2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F0f49c89d1e7298bb9930789c8ed59d48-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FShengcaiLiao\u002FQAConv)] (**TransMatcher**) [208]\n- Cross-Domain Ensemble Distillation for Domain Generalization [[ECCV 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.14058)] (**XDED**) [94]\n\n\n### Self-Supervised Learning-Based Methods\n> Self-supervised learning-based methods improve model generalization by solving some pretext tasks with data itself.\n\n- Domain Generalization for Object Recognition with Multi-Task Autoencoders [[ICCV 2015](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FGhifary_Domain_Generalization_for_ICCV_2015_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FEmma0118\u002Fmate)] (**MTAE**, **Rotated MNIST dataset**) [6]\n- Domain Generalization by Solving Jigsaw Puzzles [[CVPR 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FCarlucci_Domain_Generalization_by_Solving_Jigsaw_Puzzles_CVPR_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Ffmcarlucci\u002FJigenDG)] (**JiGen**) [98]\n- Improving Out-Of-Distribution Generalization via Multi-Task Self-Supervised Pretraining [[arXiv 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2003.13525)] [102]\n- Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition [[ICLR 2020](https:\u002F\u002Fopenreview.net\u002Fpdf?id=H1lxVyStPH)] (**GCFN**) [126]\n- Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.09316)] [[Code](https:\u002F\u002Fgithub.com\u002Femma-sjwang\u002FEISNet)] (**EISNet**) [99]\n- Zero Shot Domain Generalization [[BMVC 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2008.07443)] [[Code](https:\u002F\u002Fgithub.com\u002Faniketde\u002FZeroShotDG)] [100]\n- Out-of-domain Generalization from a Single 
Source: A Uncertainty Quantification Approach [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2108.02888)] [151]\n- Self-Supervised Learning Across Domains [[TPAMI 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.12368)] [[Code](https:\u002F\u002Fgithub.com\u002Fsilvia1993\u002FSelf-Supervised_Learning_Across_Domains)] [101]\n- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [[TIP 2021](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F83\u002F9263394\u002F09311771.pdf)] (**MMFA-AAE**) [144]\n- Scale Invariant Domain Generalization Image Recapture Detection [[ICONIP 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.03496)] (**SADG**) [177]\n- Domain Generalisation with Domain Augmented Supervised Contrastive Learning [[AAAI Student Abstract 2021](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FSA-197.LeHS.pdf)]\n- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Flileicv\u002FPDEN)] (**PDEN**) [141]\n- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FFedDG-ELCFS)] (**FedDG**) [147]\n- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FLiang_Boosting_the_Generalization_Capability_in_Cross-Domain_Few-Shot_Learning_via_Noise-Enhanced_ICCV_2021_paper.pdf)] (**NSAE**) [194]\n- 
A Style and Semantic Memory Mechanism for Domain Generalization [[ICCV 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FChen_A_Style_and_Semantic_Memory_Mechanism_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**STEAM**) [130]\n- SelfReg: Self-Supervised Contrastive Regularization for Domain Generalization [[ICCV 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FKim_SelfReg_Self-Supervised_Contrastive_Regularization_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SelfReg**) [138]\n- Domain Generalization for Mammography Detection via Multi-style and Multi-view Contrastive Learning [[MICCAI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.10827)] [[Code](https:\u002F\u002Fgithub.com\u002Flizheren\u002FMSVCL_MICCAI2021)] (**MSVCL**) [172]\n- Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization [[MM 2021](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475271)] [137]\n- Adversarial Teacher-Student Representation Learning for Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fa2137a2ae8e39b5002a3f8909ecb88fe-Paper.pdf)]\n- Domain Generalization via Contrastive Causal Learning [[arXiv 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02655)] (**CCM**) [212]\n- Towards Unsupervised Domain Generalization [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhang_Towards_Unsupervised_Domain_Generalization_CVPR_2022_paper.pdf)] (**DARLING**) [69]\n- Unsupervised Domain Generalization by Learning a Bridge Across Domains [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FHarary_Unsupervised_Domain_Generalization_by_Learning_a_Bridge_Across_Domains_CVPR_2022_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fleokarlin\u002FBrAD)] (**BrAD**) [182]\n\n### Disentangled 
Representation Learning-Based Methods\n> Disentangled representation learning-based methods aim to disentangle domain-specific and domain-invariant parts from source data, and then adopt the domain-invariant one for inference on the target domains.\n\n- Undoing the Damage of Dataset Bias [[ECCV 2012](https:\u002F\u002Flinkspringer.53yu.com\u002Fcontent\u002Fpdf\u002F10.1007\u002F978-3-642-33718-5_12.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fadikhosla\u002Fundoing-bias)] [103]\n- Deeper, Broader and Artier Domain Generalization [[ICCV 2017](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLi_Deeper_Broader_and_ICCV_2017_paper.pdf)] [[Code](https:\u002F\u002Fdali-dl.github.io\u002Fproject_iccv2017.html)] [2]\n- DIVA: Domain Invariant Variational Autoencoders [[ICML workshop 2019](http:\u002F\u002Fproceedings.mlr.press\u002Fv121\u002Filse20a\u002Filse20a.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FAMLab-Amsterdam\u002FDIVA)] (**DIVA**) [107]\n- Efficient Domain Generalization via Common-Specific Low-Rank Decomposition [[ICML 2020](http:\u002F\u002Fproceedings.mlr.press\u002Fv119\u002Fpiratla20a\u002Fpiratla20a.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fvihari\u002FCSD)] (**CSD**) [105]\n- Cross-Domain Face Presentation Attack Detection via Multi-Domain Disentangled Representation Learning [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_Cross-Domain_Face_Presentation_Attack_Detection_via_Multi-Domain_Disentangled_Representation_Learning_CVPR_2020_paper.pdf)] [106]\n- Learning to Balance Specificity and Invariance for In and Out of* Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2008.12839)] [[Code](https:\u002F\u002Fgithub.com\u002Fprithv1\u002FDMG)] (**DMG**) [104]\n- Towards Principled Disentanglement for Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.13839)] 
[[Code](https:\u002F\u002Fgithub.com\u002Fhlzhang109\u002FDDG)] (**DDG**) [170]\n- Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [[ICIP 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.13538)] [185]\n- DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation [[AAAI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.09382)] [[Code](https:\u002F\u002Fgithub.com\u002FHaoyueBaiZJU\u002FDecAug)] (**DecAug**) [171]\n- Robustnet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChoi_RobustNet_Improving_Domain_Generalization_in_Urban-Scene_Segmentation_via_Instance_Selective_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fshachoi\u002FRobustNet)] (**RobustNet**) [193]\n- Reducing Domain Gap by Reducing Style Bias [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FNam_Reducing_Domain_Gap_by_Reducing_Style_Bias_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fhyeonseobnam\u002Fsagnet)] (**SagNet**)  [230]\n- Shape-Biased Domain Generalization via Shock Graph Embeddings [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FNarayanan_Shape-Biased_Domain_Generalization_via_Shock_Graph_Embeddings_ICCV_2021_paper.pdf)] [149]\n- Domain-Invariant Disentangled Network for Generalizable Object Detection [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FLin_Domain-Invariant_Disentangled_Network_for_Generalizable_Object_Detection_ICCV_2021_paper.pdf)] [143]\n- Domain Generalization via Feature Variation Decorrelation [[MM 2021](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475311)] [146]\n- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 
2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fb0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmanhhabui\u002FmDSDI)] (**mDSDI**) [202]\n- Variational Disentanglement for Domain Generalization [[TMLR 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.05826)] (**VDN**) [210]\n- Intra-Source Style Augmentation for Improved Domain Generalization [[WACV 2023](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.10175.pdf)] (**ISSA**) [215]\n\n### Regularization-Based Methods\n> Regularization-based methods leverage regularization terms to prevent the overfitting, or design optimization strategies to guide the training.\n\n- Generalizing from Several Related Classification Tasks to a New Unlabeled Sample [[NeurIPS 2011](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2011\u002Ffile\u002Fb571ecea16a9824023ee1af16897a582-Paper.pdf)] [113]\n- MetaReg: Towards Domain Generalization using Meta-Regularization [[NeurIPS 2018](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Ffile\u002F647bba344396e7c8170902bcf2e15551-Paper.pdf)] [[Code*](https:\u002F\u002Fgithub.com\u002Felliotbeck\u002FMetaReg_PyTorch)] (**MetaReg**) [4]\n- Invariant Risk Minimization [[arXiv 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.02893.pdf;)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]\n- Learning Robust Representations by Projecting Superficial Statistics Out [[ICLR 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1903.06256)] [[Code](https:\u002F\u002Fgithub.com\u002FHaohanWang\u002FHEX)] (**HEX**, **ImageNet-Sketch dataset**) [35]\n- Distributionally Robust Neural Networks for Group Shifts On the Importance of Regularization for Worst-Case Generalization [[ICLR 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.08731)] [[Code](https:\u002F\u002Fgithub.com\u002Fkohpangwei\u002Fgroup_DRO)] 
(**DroupDRO**) [218]\n- Self-challenging Improves Cross-Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.02454)] [[Code](https:\u002F\u002Fgithub.com\u002FDeLightCMU\u002FRSC)] (**RSC**) [64]\n- Energy-based Out-of-distribution Detection [[NeurIPS 2020](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Ff5496252609c43eb8a3d147ab9b9c006-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fxieshuqin\u002FEnergy-OOD)] [181]\n- When Can We Formulate the Out-of-Distribution Generalization Problem as an Invariance Problem? [[arXiv 2021](https:\u002F\u002Fopenreview.net\u002Fpdf?id=FzGiUKN4aBp)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**IGA**) [219]\n- Learning Representations that Support Robust Transfer of Predictors  [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.09940)] [[Code](https:\u002F\u002Fgithub.com\u002FNewbeeer\u002FTRM)] (**TRM**) [220]\n- SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.02266)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**SANDMask**)  [222]\n- Out-of-Distribution Generalization via Risk Extrapolation [[ICML 2021](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fkrueger21a\u002Fkrueger21a.pdf)] (**VREx**) [190]\n- Learning Explanations that are Hard to Vary [[ICLR 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.00329)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**ANDMask**) [221]\n- A Fourier-Based Framework for Domain Generalization [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXu_A_Fourier-Based_Framework_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FMediaBrain-SJTU\u002FFACT)] (**FACT**) [160]\n- Domain Generalization via Gradient 
Surgery [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FMansilla_Domain_Generalization_via_Gradient_Surgery_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Flucasmansilla\u002FDGvGS)] (**Agr**) [148]\n- SelfReg: Self-Supervised Contrastive Regularization for Domain Generalization [[ICCV 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FKim_SelfReg_Self-Supervised_Contrastive_Regularization_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SelfReg**) [138]\n- Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation [[MM 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)]\n- Model-Based Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fa8f12d9486cbcc2fe0cfc5352011ad35-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Farobey1\u002Fmbdg)] (**MBDG**) [200]\n- Swad: Domain Generalization by Seeking Flat Minima [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fbcb41ccdc4363c6848a1d760f26c28a0-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fkhanrc\u002Fswad)] (**SWAD**) [201]\n- Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fa02ef8389f6d40f84b50504613117f88-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fanshuln\u002FTraining-for-the-Future)] (**GI**) [204]\n- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc705112d1ec18b97acac7e2d63973424-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fhenrikmarklund\u002Farm)] (**ARM**) [228]\n- Gradient Starvation: A Learning Proclivity in Neural Networks [[NeurIPS 
2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F0987b8b338d6c90bbedd8631bc499221-Paper.pdf)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**SD**) [225]\n- Quantifying and Improving Transferability in Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F5adaacd4531b78ff8b5cedfe3f4d5212-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FGordon-Guojun-Zhang\u002FTransferability-NeurIPS2021)] [206]\n- Gradient Matching for Domain Generalization [[ICLR 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.09937)] [[Code](https:\u002F\u002Fgithub.com\u002FYugeTen\u002Ffish)] (**Fish**) [224]\n- Fishr: Invariant Gradient Variances for Out-of-distribution Generalization [[ICML 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.02934)] [[Code](https:\u002F\u002Fgithub.com\u002Falexrame\u002Ffishr)] (**Fishr**) [173]\n- Global-Local Regularization Via Distributional Robustness [[AISTATS 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.00553)] [[Code](https:\u002F\u002Fgithub.com\u002FVietHoang1512\u002FGLOT)] (**GLOT**) [231]\n- Domain Generalisation via Risk Distribution Matching [[WACV 2024](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18598)] [[Code](https:\u002F\u002Fgithub.com\u002Fnktoan\u002Frisk-distribution-matching)] (**RDM**) [234]\n\n### Normalization-Based Methods\n> Normalization-based methods calibrate data from different domains by normalizing them with their statistics.\n\n- Deep CORAL: Correlation Alignment for Deep Domain Adaptation [[ECCV 2016](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.01719)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**CORAL**) [229]\n- Batch Normalization Embeddings for Deep Domain Generalization [[arXiv 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2011.12672)] (**BNE**) [96]\n- Learning to Optimize Domain Specific Normalization for Domain 
Generalization [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.04275)] (**DSON**) [94]\n- MetaNorm: Learning to Normalize Few-Shot Batches Across Domains [[ICLR 2021](https:\u002F\u002Fopenreview.net\u002Fpdf?id=9z_dNsC4B5t)] [[Code](https:\u002F\u002Fgithub.com\u002FYDU-AI\u002FMetaNorm)] (**MetaNorm**) [19]\n- Meta Batch-Instance Normalization for Generalizable Person Re-Identification [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChoi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fbismex\u002FMetaBIN)] (**MetaBIN**) [13]\n- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FHeliosZhao\u002FM3L)] (**M3L**) [12]\n- Adversarially Adaptive Normalization for Single Domain Generalization [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FFan_Adversarially_Adaptive_Normalization_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] (**ASR**) [116]\n- Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWu_Collaborative_Optimization_and_Aggregation_for_Decentralized_Domain_Generalization_and_Adaptation_ICCV_2021_paper.pdf)] (**COPDA**) [159]\n- Domain Generalization through Audio-Visual Relative Norm Alignment in First Person Action Recognition [[WACV 
2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FPlanamente_Domain_Generalization_Through_Audio-Visual_Relative_Norm_Alignment_in_First_Person_WACV_2022_paper.pdf)] (**RNA-Net**) [186]\n\n### Information-Based Methods\n> Information-based methods utilize techniques of information theory to realize domain generalization.\n\n- Learning to Learn with Variational Information Bottleneck for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07645)] (**MetaVIB**) [15]\n- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Flileicv\u002FPDEN)] (**PDEN**) [141]\n- Learning To Diversify for Single Domain Generalization [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWang_Learning_To_Diversify_for_Single_Domain_Generalization_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FBUserName\u002FLearning)] [158]\n- Invariance Principle Meets Information Bottleneck for Out-Of-Distribution Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fahujak\u002FIB-IRM)] (**IB-IRM**) [207]\n- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fb0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmanhhabui\u002FmDSDI)] (**mDSDI**) [202]\n- Invariant Information Bottleneck for Domain Generalization [[AAAI 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06333)] 
[[Code](https:\u002F\u002Fgithub.com\u002FLuodian\u002FIIB\u002Ftree\u002FIIB)] (**IIB**) [140]\n\n### Causality-Based Methods\n> Causality-based methods analyze and address the domain generalization problem from a causal perspective.\n\n- Invariant Risk Minimization [[arXiv 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.02893.pdf;)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]\n- Learning Domain-Invariant Relationship with Instrumental Variable for Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01438)] (**IV-DG**) [163]\n- A Causal Framework for Distribution Generalization [[TPAMI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.07433)] [[Code](https:\u002F\u002Frunesen.github.io\u002FNILE\u002F)] (**NILE**) [191]\n- Domain Generalization using Causal Matching [[ICML 2021](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fmahajan21b\u002Fmahajan21b.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Frobustdg)] (**MatchDG**) [73]\n- Deep Stable Learning for Out-of-Distribution Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Deep_Stable_Learning_for_Out-of-Distribution_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fxxgege\u002FStableNet)] (**StableNet**) [117]\n- Out-of-Distribution Generalization via Risk Extrapolation [[ICML 2021](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fkrueger21a\u002Fkrueger21a.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**VREx**) [217]\n- A Style and Semantic Memory Mechanism for Domain Generalization [[ICCV 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FChen_A_Style_and_Semantic_Memory_Mechanism_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**STEAM**) [130]\n- Learning Causal Semantic 
Representation for Out-of-Distribution Prediction [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F310614fca8fb8e5491295336298c340f-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fchangliu00\u002Fcausal-semantic-generative-model)] (**CSG-ind**) [145]\n- Recovering Latent Causal Factor for Generalization to Distributional Shifts [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F8c6744c9d42ec2cb9e8885b54ff744d0-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fwubotong\u002FLaCIM)] (**LaCIM**) [152]\n- On Calibration and Out-of-domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F118bd558033a1016fcc82560c65cca5f-Paper.pdf)]\n- Invariance Principle Meets Information Bottleneck for Out-Of-Distribution Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fahujak\u002FIB-IRM)] (**IB-ERM**, **IB-IRM**) [207]\n- Domain Generalization via Contrastive Causal Learning [[arXiv 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02655)] (**CCM**) [212]\n- Invariant Causal Mechanisms through Distribution Matching [[arXiv 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.11646)] [[Code*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**CausIRL-CORAL**, **CausIRL-MMD**) [216]\n- Invariant Information Bottleneck for Domain Generalization [[AAAI 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06333)] [[Code](https:\u002F\u002Fgithub.com\u002FLuodian\u002FIIB\u002Ftree\u002FIIB)] (**IIB**) [140]\n- Causal Inference via Style Transfer for Out-of-distribution Generalisation [[KDD 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.03063)] [[Code](https:\u002F\u002Fgithub.com\u002Fnktoan\u002FCausal-Inference-via-Style-Transfer-for-OOD-Generalisation)] (**FAST, 
FAFT, FAGT**) [233]\n\n### Inference-Time-Based Methods\n> Inference-time-based methods leverage the unlabeled target data, which is available at inference time, to improve generalization performance without further model training.\n\n- Generalization on Unseen Domains via Inference-Time Label-Preserving Target Projections [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FPandey_Generalization_on_Unseen_Domains_via_Inference-Time_Label-Preserving_Target_Projections_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fyys-Polaris\u002FInferenceTimeDG)] [118]\n- Adaptive Methods for Real-World Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FDubey_Adaptive_Methods_for_Real-World_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fabhimanyudubey\u002FGeoYFCC)] (**DA-ERM**) [132]\n- Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F1415fe9fea0fa1e45dddcff5682239a0-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmatsuolab\u002FT3A)] (**T3A**) [136]\n\n### Neural Architecture Search-Based Methods\n> Neural architecture search-based methods aim to dynamically tune the network architecture to improve out-of-domain generalization.\n\n- NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FBai_NAS-OoD_Neural_Architecture_Search_for_Out-of-Distribution_Generalization_ICCV_2021_paper.pdf)] (**NAS-OoD**) [129]\n\n## Single Domain Generalization\n> The goal of the single domain generalization task is to improve model performance on unknown target domains by using data from only one source domain.\n\n- Learning to Learn Single Domain Generalization [[CVPR 
2020](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FQiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FM-ADA)] (**M-ADA**) [27]\n- Out-of-domain Generalization from a Single Source: An Uncertainty Quantification Approach [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2108.02888)] [151]\n- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FQiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FUMGUD)] [168]\n- Adversarially Adaptive Normalization for Single Domain Generalization [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FFan_Adversarially_Adaptive_Normalization_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] (**ASR**) [116]\n- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Flileicv\u002FPDEN)] (**PDEN**) [141]\n- Learning To Diversify for Single Domain Generalization [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWang_Learning_To_Diversify_for_Single_Domain_Generalization_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FBUserName\u002FLearning)] [158]\n- Intra-Source Style Augmentation for Improved Domain Generalization [[WACV 2023](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.10175.pdf)] (**ISSA**) [215]\n- Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization [[WACV 2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19774)] 
[[Code](https:\u002F\u002Fgithub.com\u002FNikosEfth\u002Fcrafting-shifts)] [232]\n  \n## Semi\u002FWeak\u002FUn-Supervised Domain Generalization\n> Semi\u002Fweak-supervised domain generalization assumes that a part of the source data is unlabeled, while unsupervised domain generalization assumes no training supervision.\n\n- Visual recognition by learning from web data: A weakly supervised domain generalization approach [[CVPR 2015](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fpapers\u002FNiu_Visual_Recognition_by_2015_CVPR_paper.pdf)] [89]\n- Visual Recognition by Learning From Web Data via Weakly Supervised Domain Generalization [[TNNLS 2017](https:\u002F\u002Fbcmi.sjtu.edu.cn\u002Fhome\u002Fniuli\u002Fpaper\u002FVisual%20Recognition%20by%20Learning%20From%20Web%20Data%20via%20Weakly%20Supervised%20Domain%20Generalization.pdf)] [121]\n- Domain Generalization via Semi-supervised Meta Learning [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.12658)] [[Code](https:\u002F\u002Fgithub.com\u002Fhosseinshn\u002FDGSML)] (**DGSML**) [127]\n- Deep Semi-supervised Domain Generalization Network for Rotary Machinery Fault Diagnosis under Variable Speed [[IEEE Transactions on Instrumentation and Measurement 2020](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FYixiao-Liao\u002Fpublication\u002F341199775_Deep_Semisupervised_Domain_Generalization_Network_for_Rotary_Machinery_Fault_Diagnosis_Under_Variable_Speed\u002Flinks\u002F613f088201846e45ef450a0a\u002FDeep-Semisupervised-Domain-Generalization-Network-for-Rotary-Machinery-Fault-Diagnosis-Under-Variable-Speed.pdf)] (**DSDGN**) [125]\n- Semi-Supervised Domain Generalization with Stochastic StyleMatch [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2106.00592)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fssdg-benchmark)] (**StyleMatch**) [54]\n- Better Pseudo-label Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization 
[[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)] [156]\n- Semi-Supervised Domain Generalization in RealWorld: New Benchmark and Strong Baseline [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.10221)] [179]\n- On Challenges in Unsupervised Domain Generalization [[NeurIPS workshop 2021](https:\u002F\u002Fproceedings.mlr.press\u002Fv181\u002Fnarayanan22a\u002Fnarayanan22a.pdf)] [178]\n- Domain-Specific Bias Filtering for Single Labeled Domain Generalization [[IJCV 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00726)] [[Code](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FDSBF)] (**DSBF**) [162]\n- Towards Unsupervised Domain Generalization [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhang_Towards_Unsupervised_Domain_Generalization_CVPR_2022_paper.pdf)] (**DARLING**) [69]\n- Unsupervised Domain Generalization by Learning a Bridge Across Domains [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FHarary_Unsupervised_Domain_Generalization_by_Learning_a_Bridge_Across_Domains_CVPR_2022_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fleokarlin\u002FBrAD)] (**BrAD**) [182]\n- Label-Efficient Domain Generalization via Collaborative Exploration and Generalization [[MM 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.03644)] [[Code](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCEG)] (**CEG**) [211]\n\n## Open\u002FHeterogeneous Domain Generalization\n> Open\u002Fheterogeneous domain generalization assumes the label space of one domain is different from that of another domain.\n\n- Feature-Critic Networks for Heterogeneous Domain Generalisation [[ICML 2019](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fli19l\u002Fli19l.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fliyiying\u002FFeature_Critic)] (**Feature-Critic**) [5]\n- Episodic Training for Domain Generalization [[ICCV 
2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLi_Episodic_Training_for_Domain_Generalization_ICCV_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FHAHA-DL\u002FEpisodic-DG)] (**Epi-FCR**) [7]\n- Towards Recognizing Unseen Categories in Unseen Domains [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.12256.pdf?ref=https:\u002F\u002Fgithubhelp.com)] [[Code](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002FCuMix)] (**CuMix**) [57]\n- Heterogeneous Domain Generalization via Domain Mixup [[ICASSP 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.05448)] [[Code](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FMIXALL)] [128]\n- Open Domain Generalization with Domain-Augmented Meta-Learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FShu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fthuml\u002FOpenDG-DAML)] (**DAML**) [119]\n- Universal Cross-Domain Retrieval Generalizing Across Classes and Domains [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FPaul_Universal_Cross-Domain_Retrieval_Generalizing_Across_Classes_and_Domains_ICCV_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fmvp18\u002FUCDR)] (**SnMpNet**) [150]\n\n## Federated Domain Generalization\n> Federated domain generalization assumes that source data is distributed and cannot be fused, in order to protect data privacy.\n\n- Collaborative Semantic Aggregation and Calibration for Separated Domain Generalization [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.06736)] [[Code](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCSAC)] (**CSAC**) [161]\n- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 
2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FFedDG-ELCFS)] (**FedDG**) [147]\n- Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWu_Collaborative_Optimization_and_Aggregation_for_Decentralized_Domain_Generalization_and_Adaptation_ICCV_2021_paper.pdf)] (**COPDA**) [159]\n\n## Source-free Domain Generalization\n> Source-free domain generalization aims to improve model's generalization capability to arbitrary unseen domains without exploiting any source domain data.\n\n- PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization [[ICCV 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15199)] [[Project Page](https:\u002F\u002FPromptStyler.github.io\u002F)] (**PromptStyler**) [231]\n\n## Applications\n### Person Re-Identification\n- Deep Domain-Adversarial Image Generation for Domain Generalisation [[AAAI 2020](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F7003\u002F6857)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)]\n- Learning to Generate Novel Domains for Domain Generalization [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304)] [[Code](https:\u002F\u002Fgithub.com\u002Fmousecpn\u002FL2A-OT)] (**L2A-OT**, **Digits-DG dataset**) [28]\n- Learning Generalisable Omni-Scale Representations for Person Re-Identification [[TPAMI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.06827)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fdeep-person-reid)] [114]\n- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [[TIP 
2021](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F83\u002F9263394\u002F09311771.pdf)] (**MMFA-AAE**) [144]\n- Domain Generalization with Mixstyle [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2104.02008)] [[Code](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [56]\n- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FHeliosZhao\u002FM3L)] (**M3L**) [12]\n- Meta Batch-Instance Normalization for Generalizable Person Re-Identification [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChoi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Fbismex\u002FMetaBIN)] (**MetaBIN**) [13]\n- Generalizable Person Re-identification with Relevance-aware Mixture of Experts [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FDai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.pdf)] (**RaMoE**) [187]\n- TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F0f49c89d1e7298bb9930789c8ed59d48-Paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FShengcaiLiao\u002FQAConv)] (**TransMatcher**) [208]\n\n### Face Recognition & Anti-Spoofing\n- Multi-Adversarial Discriminative Deep Domain Generalization for Face Presentation Attack Detection [[CVPR 
2019](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Frshaojimmy\u002FCVPR2019-MADDoG)] (**MADDG**) [78]\n- Towards Universal Representation Learning for Deep Face Recognition [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FShi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002FMatyushinMA\u002Funi_rep_deep_faces)] [22]\n- Cross-Domain Face Presentation Attack Detection via Multi-Domain Disentangled Representation Learning [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_Cross-Domain_Face_Presentation_Attack_Detection_via_Multi-Domain_Disentangled_Representation_Learning_CVPR_2020_paper.pdf)] [106]\n- Single-Side Domain Generalization for Face Anti-Spoofing [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FJia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)] [[Code](https:\u002F\u002Fgithub.com\u002Ftaylover-pei\u002FSSDG-CVPR2020)] (**SSDG**) [79]\n\n## Related Topics\n### Life-Long Learning\n- Sequential Learning for Domain Generalization [[ECCV workshop 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.01377)] (**S-MLDG**) [14]\n- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FVolpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]\n\n# Publications\n\n| Top Conference  |  Papers  |\n|  ----  | ----  |\n|  before 2014  |  **CVPR:** [8], [11]; **ICCV:** [16], [41]; **NeurIPS:** [31], [113]; **ECCV:** [32], 
[87], [103]; **ICML:** [65]  |\n|  2015  |  **CVPR:** [89]; **ICML:** [30]; **ICCV:** [6], [46], [88]  |\n|  2016  |  **CVPR:** [42], [44], [120]; **IJCAI:** [66]; **ECCV:** [43], [47], [229]  |\n|  2017  |  **CVPR:** [20]; **ICCV:** [2], [71]; **NeurIPS:** [38]  |\n|  2018  |  **ICLR:** [1], [53], [68]; **CVPR:** [76]; **ECCV:** [45], [77]; **NeurIPS:** [4], [25]  |\n|  2019  |  **ICLR:** [35], [37]; **CVPR:** [78], [98]; **ICML:** [5], [107], [110]; **ICCV:** [7], [21], [33], [62], [63]; **NeurIPS:** [18]  |\n|  2020  |  **ICLR:** [55], [83], [126], [218]; **CVPR:** [22], [27], [79], [106]; **ICML:** [105]; **ECCV:** [14], [15], [28], [57], [64], [94], [99], [104]; **NeurIPS:** [75], [86], [112], [181]  |\n|  2021  |  **ICLR:** [19], [56], [59], [134], [139], [171], [175], [196], [221]; **CVPR:** [12], [13], [115], [116], [117], [118], [119], [132], [141], [147], [153], [160], [168], [187], [193]; **IJCAI:** [155], [195], [230]; **ICML:** [73], [190], [217]; **ICCV:** [129], [130], [133], [135], [138], [142], [143], [148], [149], [150], [158], [159], [194]; **MM:** [131], [137], [146], [157]; **NeurIPS:** [136], [145], [152], [154], [198], [199], [200], [201], [202], [203], [204], [205], [206], [207], [208], [225], [228]  |\n|  2022  |  **AAAI:** [140]; **ICLR:** [213], [224]; **CVPR:** [69], [182], [214]; **ICML:** [173]; **MM:** [211]  |\n|  2023  |  **WACV:** [215]; **ICLR:** [223]; **ICCV:** [231]; **KDD:** [233] |\n|  2024  |  **WACV:** [234] |\n|  2025  |  **WACV:** [232] |\n\n| Top Journal  |  Papers  |\n|  ----  | ----  |\n|  before 2017  | **IJCV:** [9], [10]; **JMLR:** [226]  |\n|  2017  |  **TPAMI:** [67]; **TIP:** [91]  |\n|  2021  |  **TIP:** [34], [144]; **TPAMI:** [101], [114], [191], [197]; **JMLR:** [188]  |\n|  2022 | **TMLR:** [209], [210]; **IJCV:** [162] |\n\n| arXiv  |  Papers  |\n|  ----  | ----  |\n|  before 2014  |  [40]  |\n|  2017  |  [36], [52]  |\n|  2018  |  [166]  |\n|  2019  |  [81], [123], [165], [169]  |\n|  
2020  |  [60], [82], [96], [102], [127], [189], [227]  |\n|  2021  |  [3], [54], [58], [151], [156], [161], [163], [170], [174], [176], [178], [179], [184], [192], [219], [222]  |\n|  2022  |  [183], [212], [216], [220]  |\n\n|  Else  |  Papers  |\n|  ----  |  ----  |\n|  before 2018  |  [29], [39], [48], [49], [50], [51], [90], [92], [97], [109], [111], [121], [122]  |\n|  2019  |  [26], [72], [84], [167]  |\n|  2020  |  [17], [23], [24], [61], [70], [74], [80], [85], [93], [95], [100], [124], [125], [128], [164]  |\n|  2021  |  [108], [172], [177], [180], [185]  |\n|  2022  |  [186]  |\n\n# Datasets\n> Evaluations on the following datasets often follow leave-one-domain-out protocol: randomly choose one domain to hold out as the target domain, while the others are used as the  source domain(s).\n>\n| Datasets (download link) | Description | Related papers |\n| :---- | :----: | :----: |\n| **Colored MNIST** [[165]](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.02893.pdf) | Handwritten digit recognition; 3 domains: {0.1, 0.3, 0.9}; 70,000 samples of dimension (2, 28, 28); 2 classes | [82], [138], [140], [149], [152], [154], [165], [171], [173], [190], [200], [202], [214], [216], [217], [219], [220], [222], [224], [234] |\n| **Rotated MNIST** [[6]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FGhifary_Domain_Generalization_for_ICCV_2015_paper.pdf) ([original](https:\u002F\u002Fgithub.com\u002FEmma0118\u002Fmate)) | Handwritten digit recognition; 6 domains with rotated degree: {0, 15, 30, 45, 60, 75}; 7,000 samples of dimension (1, 28, 28); 10 classes | [5], [6], [15], [35], [53], [55], [63], [71], [73], [74], [76], [77], [86], [90], [105], [107], [138], [140], [170], [173], [202], [204], [206], [216], [222], [224] |\n| **Digits-DG** [[28]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304) | Handwritten digit recognition; 4 domains: {MNIST 
[[29]](http://lushuangning.oss-cn-beijing.aliyuncs.com/CNN%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97/Gradient-Based_Learning_Applied_to_Document_Recognition.pdf), MNIST-M [[30](http://proceedings.mlr.press/v37/ganin15.pdf)], SVHN [[31](https://research.google/pubs/pub37648.pdf)], SYN [[30](http://proceedings.mlr.press/v37/ganin15.pdf)]}; 24,000 samples; 10 classes | [21], [25], [27], [28], [34], [35], [55], [59], [63], [94], [98], [116], [118], [130], [141], [142], [146], [151], [153], [157], [158], [159], [160], [166], [168], [179], [189], [203], [209], [210], [232], [233] |
| **VLCS** [[16]](http://openaccess.thecvf.com/content_iccv_2013/papers/Fang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf) ([1](https://drive.google.com/uc?id=1skwblH1_okBwxWxmRsp9_qi15hyPpxg8); or [original](https://www.mediafire.com/file/7yv132lgn1v267r/vlcs.tar.gz/file)) | Object recognition; 4 domains: {Caltech [[8]](http://www.vision.caltech.edu/publications/Fei-FeiCompVIsImageU2007.pdf), LabelMe [[9]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-007-0090-8.pdf&casa_token=n3w4Sen-huAAAAAA:sJY2dHreDGe2V4KE9jDehftM1W-Sn1z8bqeF_WK8Q9t4B0dFk5OXEAlIP7VYnr8UfiWLAOPG7dK0ZveYWs8), PASCAL [[10]](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/content/pdf/10.1007/s11263-009-0275-4.pdf&casa_token=Zb6LfMuhy_sAAAAA:Sqk_aoTWdXx37FQjUFaZN9ZMQxrUhqO2S_HbOO2a9BKtejW7CMekg-3PDVw6Yjw7BZqihyjP0D_Y6H2msBo), SUN [[11]](https://dspace.mit.edu/bitstream/handle/1721.1/60690/Oliva_SUN%20database.pdf?sequence=1&isAllowed=y)}; 10,729 samples of dimension (3, 224, 224); 5 classes; about 3.6 GB | [2], [6], [7], [14], [15], [18], [60], [61], [64], [67], [68], [70], [71], [74], [76], [77], [81], [83], [86], [91], [98], [99], [101], [102], [103], [117], [118], [126], [127], [131], [132], [136], [138], [140], [142], [145], [146], [148], [149], [161], [170], [173], [174], [184], [190], [195], [199], [201], [202], [203], [209], [216], [217], [222], [223], [224], [231], [233], [234] |
| **Office31+Caltech** [[32]](https://linkspringer.53yu.com/content/pdf/10.1007/978-3-642-15561-1_16.pdf) ([1](https://drive.google.com/file/d/14OIlzWFmi5455AjeBZLak2Ku-cFUrfEo/view)) | Object recognition; 4 domains: {Amazon, Webcam, DSLR, Caltech}; 4,652 samples in 31 classes (office31) or 2,533 samples in 10 classes (office31+caltech); 51 MB | [6], [35], [67], [68], [70], [71], [80], [91], [96], [119], [131], [167] |
| **OfficeHome** [[20]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Venkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1uY0pj7oFsjMxRwaD3Sxy0jgel0fsYXLC); or [original](https://drive.google.com/file/d/0B81rNlvomiwed0V1YUxQdC1uOTg/view?resourcekey=0-2SNWq0CDAuWOBRRBL7ZZsw)) | Object recognition; 4 domains: {Art, Clipart, Product, Real World}; 15,588 samples of dimension (3, 224, 224); 65 classes; 1.1 GB | [19], [54], [28], [34], [55], [58], [60], [61], [64], [80], [92], [94], [98], [101], [118], [126], [130], [131], [132], [133], [137], [138], [140], [146], [148], [156], [159], [160], [162], [163], [167], [173], [174], [178], [179], [182], [184], [189], [190], [199], [201], [202], [203], [206], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230], [231], [233], [234] |
| **PACS** [[2]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Deeper_Broader_and_ICCV_2017_paper.pdf) ([1](https://drive.google.com/uc?id=1JFr8f805nMUelQWWmfnJR3y4_SYoN5Pd); or [original](https://drive.google.com/drive/folders/0B6x7gtvErXgfUU1WcGY5SzdwZVk?resourcekey=0-2fvpQY_QSyJf2uIECzqPuQ)) | Object recognition; 4 domains: {photo, art_painting, cartoon, sketch}; 9,991 samples of dimension (3, 224, 224); 7 classes; 174 MB | [1], [2], [4], [5], [14], [15], [18], [19], [34], [54], [28], [35], [55], [56], [57], [58], [59], [60], [61], [64], [69], [73], [77], [80], [81], [82], [83], [84], [86], [90], [92], [94], [96], [98], [99], [101], [102], [104], [105], [116], [117], [118], [127], [129], [130], [131], [132], [136], [137], [138], [139], [140], [142], [145], [146], [148], [149], [153], [156], [157], [158], [159], [160], [161], [162], [163], [167], [170], [171], [173], [174], [178], [179], [180], [182], [184], [189], [190], [195], [199], [200], [201], [202], [203], [206], [209], [210], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230], [231], [232], [233], [234] |
| **DomainNet** [[33](https://openaccess.thecvf.com/content_ICCV_2019/papers/Peng_Moment_Matching_for_Multi-Source_Domain_Adaptation_ICCV_2019_paper.pdf)] ([clipart](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/clipart.zip), [infograph](http://csr.bu.edu/ftp/visda/2019/multi-source/infograph.zip), [painting](http://csr.bu.edu/ftp/visda/2019/multi-source/groundtruth/painting.zip), [quick-draw](http://csr.bu.edu/ftp/visda/2019/multi-source/quickdraw.zip), [real](http://csr.bu.edu/ftp/visda/2019/multi-source/real.zip), and [sketch](http://csr.bu.edu/ftp/visda/2019/multi-source/sketch.zip); or [original](http://ai.bu.edu/M3SDA/)) | Object recognition; 6 domains: {clipart, infograph, painting, quick-draw, real, sketch}; 586,575 samples of dimension (3, 224, 224); 345 classes; 1.2 GB + 4.0 GB + 3.4 GB + 439 MB + 5.6 GB + 2.5 GB | [34], [57], [69], [104], [119], [130], [131], [132], [133], [138], [140], [150], [173], [182], [189], [201], [202], [203], [216], [222], [223], [224], [230], [231], [234] |
| **mini-DomainNet** [[34]](https://arxiv.53yu.com/pdf/2003.07325) | Object recognition; a smaller and less noisy version of DomainNet; 4 domains: {clipart, painting, real, sketch}; 140,006 samples | [34], [130], [156], [157], [210], [232] |
| **ImageNet-Sketch** [[35]](https://arxiv.53yu.com/pdf/1903.06256) | Object recognition; 2 domains: {real, sketch}; 50,000 samples | [64] |
| **VisDA-17** [[36](https://arxiv.53yu.com/pdf/1710.06924)] | Object recognition; 3 domains of synthetic-to-real generalization; 280,157 samples | [119], [182] |
| **CIFAR-10-C** / **CIFAR-100-C** / **ImageNet-C** [[37]](https://arxiv.53yu.com/pdf/1903.12261.pdf?ref=https://githubhelp.com) ([original](https://github.com/hendrycks/robustness/)) | Object recognition; the test data are damaged by 15 corruptions (each with 5 intensity levels) drawn from 4 categories (noise, blur, weather, and digital); 60,000/60,000/1.3M samples | [27], [69], [74], [116], [141], [151], [168] |
| **Visual Decathlon (VD)** [[38]](https://proceedings.neurips.cc/paper/2017/file/e7b24b112a44fdd9ee93bdf998c6ca0e-Paper.pdf) | Object/action/handwritten/digit recognition; 10 domains from the combination of 10 datasets; 1,659,142 samples | [5], [7], [128] |
| **IXMAS** [[39]](https://hal.inria.fr/docs/00/54/46/29/PDF/cviu_motion_history_volumes.pdf) | Action recognition; 5 domains with 5 camera views, 10 subjects, and 5 actions; 1,650 samples | [7], [14], [67], [76] |
| **SYNTHIA** [[42]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Ros_The_SYNTHIA_Dataset_CVPR_2016_paper.pdf) | Semantic segmentation; 15 domains with 4 locations and 5 weather conditions; 2,700 samples | [27], [62], [115], [141], [151], [185], [193] |
| **GTA5-Cityscapes** [[43]](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-46475-6_7), [[44]](http://openaccess.thecvf.com/content_cvpr_2016/papers/Cordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf) | Semantic segmentation; 2 domains of synthetic-to-real generalization; 29,966 samples | [62], [115], [185], [193] |
| **Cityscapes-ACDC** [[44]](http://openaccess.thecvf.com/content_cvpr_2016/papers/Cordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf) ([original](https://acdc.vision.ee.ethz.ch/overview)) | Semantic segmentation; real-life domain shifts; ACDC contains four adverse conditions: rain, fog, snow, and night | [215] |
| **Terra Incognita (TerraInc)** [[45]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Beery_Recognition_in_Terra_ECCV_2018_paper.pdf) ([1](https://lilablobssc.blob.core.windows.net/caltechcameratraps/eccv_18_all_images_sm.tar.gz) and [2](https://lilablobssc.blob.core.windows.net/caltechcameratraps/labels/caltech_camera_traps.json.zip); or [original](https://lila.science/datasets/caltech-camera-traps)) | Animal classification; 4 domains captured at different geographical locations: {L100, L38, L43, L46}; 24,788 samples of dimension (3, 224, 224); 10 classes; 6.0 GB + 8.6 MB | [132], [136], [138], [140], [173], [201], [202], [207], [212], [214], [216], [222], [223], [224], [231], [234] |
| **Market-Duke** [[46]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf), [[47]](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-48881-3_2) | Person re-identification; cross-dataset re-ID; heterogeneous DG with 2 domains; 69,079 samples | [12], [13], [28], [55], [56], [58], [114], [144], [187], [208] |
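Most of the multi-domain benchmarks above (PACS, VLCS, OfficeHome, DomainNet, TerraInc, etc.) are evaluated with a leave-one-domain-out protocol: each domain is held out once as the unseen target while the model trains on the remaining domains. A minimal sketch of how such splits are enumerated (the helper function is illustrative, not part of any listed library; the domain names follow the PACS row above):

```python
def leave_one_domain_out(domains):
    """Yield (source_domains, target_domain) pairs: each domain is
    held out once as the unseen test domain."""
    for held_out in domains:
        sources = [d for d in domains if d != held_out]
        yield sources, held_out

# PACS has 4 domains, so it defines 4 generalization tasks.
pacs = ["photo", "art_painting", "cartoon", "sketch"]
tasks = list(leave_one_domain_out(pacs))
# e.g. tasks[0] trains on {art_painting, cartoon, sketch} and tests on photo.
```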
<!-- **UCF-HMDB** [[40](https://arxiv.53yu.com/pdf/1212.0402.pdf?ref=https://githubhelp.com)], [[41](https://dspace.mit.edu/bitstream/handle/1721.1/69981/Poggio-HMDB.pdf?sequence=1&isAllowed=y)] | Action recognition | 2 domains with 12 overlapping actions; 3809 samples |  | -->
<!-- **Face** [22] | >5M | 9 | Face recognition | Combination of 9 face datasets |  |
**COMI** [[48](http://www.cbsr.ia.ac.cn/users/jjyan/zhang-icb2012.pdf)], [49], [50], [[51](https://dl.gi.de/bitstream/handle/20.500.12116/18295/183.pdf?sequence=1)] | 8500 | 4 | Face anti-spoofing | Combination of 4 face anti-spoofing datasets |  | -->

# Libraries
> We list the GitHub libraries of domain generalization (sorted by stars).

- [DeepDG (jindongwang)](https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG): Deep Domain Generalization Toolkit.
- [Transfer Learning Library (thuml)](https://github.com/thuml/Transfer-Learning-Library) for Domain Adaptation, Task Adaptation, and Domain Generalization.
- [DomainBed (facebookresearch)](https://github.com/facebookresearch/DomainBed) [134]: a suite to test domain generalization algorithms.
- [Dassl (KaiyangZhou)](https://github.com/KaiyangZhou/Dassl.pytorch): a PyTorch toolbox for domain adaptation, semi-supervised learning, and domain generalization.
# Lectures & Tutorials & Talks
- **(Talk 2021)** Generalizing to Unseen Domains: A Survey on Domain Generalization [155]. [[Video](https://www.bilibili.com/video/BV1ro4y1S7dd/)] [[Slides](http://jd92.wang/assets/files/DGSurvey-ppt.pdf)] *(Jindong Wang (MSRA), in Chinese)*

# Other Resources
- A collection of domain generalization papers organized by [amber0309](https://github.com/amber0309/Domain-generalization).
- A collection of domain generalization papers organized by [jindongwang](https://github.com/jindongwang/transferlearning/blob/master/doc/awesome_paper.md#domain-generalization).
- A collection of papers on domain generalization, domain adaptation, causality, robustness, prompt, optimization, generative models, etc., organized by [yfzhang114](https://github.com/yfzhang114/Generalization-Causality).
- Adaptation and Generalization Across Domains in Visual Recognition with Deep Neural Networks [[PhD 2020, Kaiyang Zhou (University of Surrey)](https://openresearch.surrey.ac.uk/esploro/outputs/doctoral/Adaptation-and-Generalization-Across-Domains-in/99513024202346)] [164]

# Contributing & Contact
Feel free to contribute to our repository.

- If you would like to *correct mistakes*, please do it directly;
- If you would like to *add/update papers*, please finish the following tasks (if necessary):
    1. Find the max index (current max: **[234]**, not used: none), and create a new one.
    2. Update [Publications](#publications).
    3. Update [Papers](#papers).
    4. Update [Datasets](#datasets).
- If you have any *questions or advice*, please contact us by email (yuanjk@zju.edu.cn) or GitHub issues.

Thank you for your cooperation and contributions!

# Acknowledgements
The hierarchy of the [Contents](#contents) is mainly based on [awesome-domain-adaptation](https://github.com/zhaoxin94/awesome-domain-adaptation#unsupervised-da).
- We refer to [3] to design the [Contents](#contents) and the table of [Datasets](#datasets).

---

# Awesome Domain Generalization
[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

This repository collects awesome resources on **domain generalization**, including papers, code, etc.

If you would like to contribute to this repository or have any questions/suggestions, please refer to [Contributing & Contact](#contributing--contact).

# Contents
- [Awesome Domain Generalization](#awesome-domain-generalization)
- [Contents](#contents)
- [Papers](#papers)
  - [Survey](#survey)
  - [Theory & Analysis](#theory--analysis)
  - [Dataset](#dataset)
  - [Domain Generalization](#domain-generalization)
    - [Domain Alignment-Based Methods](#domain-alignment-based-methods)
    - [Data Augmentation-Based Methods](#data-augmentation-based-methods)
    - [Meta-Learning-Based Methods](#meta-learning-based-methods)
    - [Ensemble Learning-Based Methods](#ensemble-learning-based-methods)
    - [Self-Supervised Learning-Based Methods](#self-supervised-learning-based-methods)
    - [Disentangled Representation Learning-Based Methods](#disentangled-representation-learning-based-methods)
    - [Regularization-Based Methods](#regularization-based-methods)
    - [Normalization-Based Methods](#normalization-based-methods)
    - [Information-Based Methods](#information-based-methods)
    - [Causality-Based Methods](#causality-based-methods)
    - [Inference-Time-Based Methods](#inference-time-based-methods)
    - [Neural Architecture Search-Based Methods](#neural-architecture-search-based-methods)
  - [Single Domain Generalization](#single-domain-generalization)
  - [Semi/Weak/Un-Supervised Domain Generalization](#semiweakun-supervised-domain-generalization)
  - [Open/Heterogeneous Domain Generalization](#openheterogeneous-domain-generalization)
  - [Federated Domain Generalization](#federated-domain-generalization)
  - [Source-Free Domain Generalization](#source-free-domain-generalization)
  - [Applications](#applications)
    - [Person Re-Identification](#person-re-identification)
    - [Face Recognition & Anti-Spoofing](#face-recognition--anti-spoofing)
  - [Related Topics](#related-topics)
    - [Life-Long Learning](#life-long-learning)
- [Publications](#publications)
- [Datasets](#datasets)
- [Libraries](#libraries)
- [Lectures & Tutorials & Talks](#lectures--tutorials--talks)
- [Other Resources](#other-resources)
- [Contributing & Contact](#contributing--contact)
- [Acknowledgements](#acknowledgements)

# Papers
> We list papers, implementation code (unofficial code is marked with *), etc., in order of year and from journals to conferences. Note that some papers may fall into more than one category.

## Survey
- Generalizing to Unseen Domains: A Survey on Domain Generalization [[IJCAI 2021](https://arxiv.53yu.com/pdf/2103.03097)] [[Slides](http://jd92.wang/assets/files/DGSurvey-ppt.pdf)] [155]
- Domain Generalization in Vision: A Survey [[TPAMI 2022](https://arxiv.org/abs/2103.02503)] [3]

## Theory & Analysis
> We list the papers that provide insightful theoretical analyses or broad empirical studies of domain generalization.

- A Generalization Error Bound for Multi-class Domain Generalization [[arXiv 2019](https://arxiv.org/pdf/1905.10392)] [123]
- Domain Generalization by Marginal Transfer Learning [[JMLR 2021](https://www.jmlr.org/papers/volume22/17-679/17-679.pdf)] [[Code](https://github.com/aniketde/DomainGeneralizationMarginal)] (**MTL**) [188]
- The Risks of Invariant Risk Minimization [[ICLR 2021](https://arxiv.org/pdf/2010.05761)] [196]
- In Search of Lost Domain Generalization [[ICLR 2021](https://arxiv.org/pdf/2007.01434.pdf?fbclid=IwAR1YkUXkIhC6fhr6eI687zBXo_W2tTjjTAFnyjEWvmq4gQKon_4pIDbTnQ4)] [134]
- The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Hendrycks_The_Many_Faces_of_Robustness_A_Critical_Analysis_of_Out-of-Distribution_ICCV_2021_paper.pdf)] [[Code](https://github.com/hendrycks/imagenet-r)] [135]
- An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/ecf9902e0f61677c8de25ae60b654669-Paper.pdf)] [[Code](https://github.com/facebookresearch/domainbed_measures)] [198]
- Towards a Theoretical Framework of Out-of-Distribution Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/c5c1cb0bebd56ae38817b251ad72bedb-Paper.pdf)] [199]
- Out-of-Distribution Generalization in Kernel Regression [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/691dcb1d65f31967a874d18383b9da75-Paper.pdf)] [205]
- Quantifying and Improving Transferability in Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/5adaacd4531b78ff8b5cedfe3f4d5212-Paper.pdf)] [[Code](https://github.com/Gordon-Guojun-Zhang/Transferability-NeurIPS2021)] (**Transfer**) [206]
- OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization [[CVPR 2022](https://openaccess.thecvf.com/content/CVPR2022/papers/Ye_OoD-Bench_Quantifying_and_Understanding_Two_Dimensions_of_Out-of-Distribution_Generalization_CVPR_2022_paper.pdf)] [[Code](https://github.com/ynysjtu/ood_bench)] (**OoD-Bench**) [214]
- Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization [[WACV 2025](https://arxiv.org/abs/2409.19774)] [[Code](https://github.com/NikosEfth/crafting-shifts)] [232]

## Dataset
- Free Viewpoint Action Recognition Using Motion History Volumes [[CVIU 2006](https://hal.inria.fr/docs/00/54/46/29/PDF/cviu_motion_history_volumes.pdf)] (**IXMAS dataset**) [39]
- Geodesic Flow Kernel for Unsupervised Domain Adaptation [[CVPR 2012](http://openaccess.thecvf.com/content_iccv_2013/papers/Fang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf)] (**Office-Caltech dataset**) [32]
- Unbiased Metric Learning: On the Utilization of Multiple Datasets and Web Images for Softening Bias [[ICCV 2013](https://openaccess.thecvf.com/content_iccv_2013/papers/Fang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf)] (**VLCS dataset**) [16]
- Domain Generalization for Object Recognition with Multi-task Autoencoders [[ICCV 2015](http://openaccess.thecvf.com/content_iccv_2015/papers/Ghifary_Domain_Generalization_for_ICCV_2015_paper.pdf)] [[Code](https://github.com/Emma0118/mate)] (**MTAE**, **Rotated MNIST dataset**) [6]
- Scalable Person Re-identification: A Benchmark [[ICCV 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf)] (**Market-1501 dataset**) [46]
- The Cityscapes Dataset for Semantic Urban Scene Understanding [[CVPR 2016](https://openaccess.thecvf.com/content_cvpr_2016/papers/Cordts_The_Cityscapes_Dataset_CVPR_2016_paper.pdf)] (**Cityscapes dataset**) [44]
- The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes [[CVPR 2016](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Ros_The_SYNTHIA_Dataset_CVPR_2016_paper.pdf)] (**SYNTHIA dataset**) [42]
- Playing for Data: Ground Truth from Computer Games [[ECCV 2016](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-46475-6_7)] (**GTA5 dataset**) [43]
- Performance Measures and a Data Set for Multi-target, Multi-camera Tracking [[ECCV 2016](https://linkspringer.53yu.com/chapter/10.1007/978-3-319-48881-3_2)] (**Duke dataset**) [47]
- VisDA: The Visual Domain Adaptation Challenge [[arXiv 2017](https://arxiv.org/pdf/1710.06924.pdf)] (**VisDA-17 dataset**) [36]
- Deep Hashing Network for Unsupervised Domain Adaptation [[CVPR 2017](https://openaccess.thecvf.com/content_cvpr_2017/papers/Venkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf)] (**OfficeHome dataset**) [20]
- Deeper, Broader and Artier Domain Generalization [[ICCV 2017](https://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Deeper_Broader_and_ICCV_2017_paper.pdf)] [[Code](https://dali-dl.github.io/project_iccv2017.html)] (**PACS dataset**) [2]
- Learning Multiple Visual Domains with Residual Adapters [[NeurIPS 2017](https://proceedings.neurips.cc/paper/2017/file/e7b24b112a44fdd9ee93bdf998c6ca0e-Paper.pdf)] (**Visual Decathlon (VD) dataset**) [38]
- Recognition in Terra Incognita [[ECCV 2018](https://openaccess.thecvf.com/content_ECCV_2018/papers/Beery_Recognition_in_Terra_ECCV_2018_paper.pdf)] (**Terra Incognita dataset**) [45]
- Invariant Risk Minimization [[arXiv 2019](https://arxiv.53yu.com/pdf/1907.02893.pdf)] [[Code](https://github.com/facebookresearch/InvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]
- Learning Robust Representations by Projecting Superficial Statistics Out [[ICLR 2019](https://arxiv.53yu.com/pdf/1903.06256)] [[Code](https://github.com/HaohanWang/HEX)] (**HEX**, **ImageNet-Sketch dataset**) [35]
- Benchmarking Neural Network Robustness to Common Corruptions and Perturbations [[ICLR 2019](https://arxiv.org/pdf/1903.12261.pdf?ref=https://githubhelp.com)] (**CIFAR-10-C / CIFAR-100-C / ImageNet-C datasets**) [37]
- Moment Matching for Multi-source Domain Adaptation [[ICCV 2019](https://openaccess.thecvf.com/content_ICCV_2019/papers/Peng_Moment_Matching_for_Multi-Source_Domain_Adaptation_ICCV_2019_paper.pdf)] [[Code](http://ai.bu.edu/M3SDA/)] (**DomainNet dataset**) [33]
- Learning to Generate Novel Domains for Domain Generalization [[ECCV 2020](https://arxiv.org/pdf/2007.03304)] [[Code](https://github.com/mousecpn/L2A-OT)] (**L2A-OT**, **Digits-DG dataset**) [28]
- Domain Adaptive Ensemble Learning [[TIP 2021](https://arxiv.53yu.com/pdf/2003.07325)] [[Code](https://github.com/KaiyangZhou/Dassl.pytorch)] (**mini-DomainNet dataset**) [34]
- Towards Non-IID Image Classification: A Dataset and Baselines [[PR 2021](https://arxiv.org/pdf/1906.02899)] (**NICO dataset**) [108]
- NICO++: Towards Better Benchmarking for Domain Generalization [[arXiv 2022](https://arxiv.org/pdf/2204.08040)] (**NICO++ dataset**) [183]
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts [[ICLR 2022](https://arxiv.org/pdf/2202.06523)] [[Code](https://github.com/Weixin-Liang/MetaShift)] (**MetaShift dataset**) [213]

## Domain Generalization
> To address the dataset/domain shift problem [[109]](https://www.sciencedirect.com/science/article/pii/S0031320311002901?casa_token=qIu5tyPmlgQAAAAA:IDLcYED3jzUGsissKY_EuDLQTMCkGQrEWoAq542Cbcd4FKQinvp78Wgb6jhRiSLqGdQCvcifwprz) [[110](http://proceedings.mlr.press/v97/recht19a/recht19a.pdf)] [[111](https://link.springer.com/content/pdf/10.1007/s10994-009-5152-4.pdf)] [[112]](https://proceedings.neurips.cc/paper/2020/file/d8330f857a17c53d217014ee776bfd50-Paper.pdf), domain generalization [[113](https://proceedings.neurips.cc/paper/2011/file/b571ecea16a9824023ee1af16897a582-Paper.pdf)] aims to learn a model from source domains that generalizes well to unseen target domains.

### Domain Alignment-Based Methods
> Domain alignment-based methods aim to minimize the differences among source domains so as to learn domain-invariant representations.
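To make the alignment idea concrete (an illustrative sketch only, not the implementation of any paper listed below; function names are ours): many of these methods add a penalty such as the maximum mean discrepancy (MMD) between the feature distributions of each pair of source domains, shown here with a linear kernel for brevity:

```python
import numpy as np

def linear_mmd(feat_a, feat_b):
    """Squared MMD with a linear kernel: the squared distance between
    the mean feature vectors of two domains."""
    diff = feat_a.mean(axis=0) - feat_b.mean(axis=0)
    return float(diff @ diff)

def alignment_penalty(domain_feats):
    """Average pairwise MMD over all source-domain pairs; adding this to
    the task loss encourages domain-invariant features."""
    pairs = [(i, j) for i in range(len(domain_feats))
             for j in range(i + 1, len(domain_feats))]
    return sum(linear_mmd(domain_feats[i], domain_feats[j])
               for i, j in pairs) / len(pairs)

rng = np.random.default_rng(0)
# Two source domains with matching feature statistics -> penalty near 0.
f1 = rng.normal(0.0, 1.0, size=(256, 8))
f2 = rng.normal(0.0, 1.0, size=(256, 8))
# A third, shifted domain raises the penalty.
f3 = rng.normal(2.0, 1.0, size=(256, 8))
low = alignment_penalty([f1, f2])
high = alignment_penalty([f1, f2, f3])
```

Adversarial variants (e.g. DANN-style training below) pursue the same goal by making a domain classifier unable to tell source domains apart instead of matching moments explicitly.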
- Domain Generalization via Invariant Feature Representation [[ICML 2013](http://proceedings.mlr.press/v28/muandet13.pdf)] [[Code](https://github.com/krikamol/dg-dica)] (**DICA**) [65]
- Domain-Adversarial Training of Neural Networks [[JMLR 2016](https://www.jmlr.org/papers/volume17/15-239/15-239.pdf)] [[Code](https://graal.ift.ulaval.ca/dann/)] (**DANN**) [226]
- Learning Attributes Equals Multi-Source Domain Generalization [[CVPR 2016](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gan_Learning_Attributes_Equals_CVPR_2016_paper.pdf)] (**UDICA**) [120]
- Robust Domain Generalisation by Enforcing Distribution Invariance [[IJCAI 2016](https://eprints.qut.edu.au/115382/15/Erfani2016IJCAI.pdf)] (**ESRand**) [66]
- Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization [[TPAMI 2017](https://arxiv.53yu.com/pdf/1510.04373)] (**SCA**) [67]
- Unified Deep Supervised Domain Adaptation and Generalization [[ICCV 2017](https://openaccess.thecvf.com/content_ICCV_2017/papers/Motiian_Unified_Deep_Supervised_ICCV_2017_paper.pdf)] [[Code](https://github.com/samotiian/CCSA)] (**CCSA**) [71]
- Beyond Domain Adaptation: Unseen Domain Encapsulation via Universal Non-volume Preserving Models [[arXiv 2018](https://arxiv.53yu.com/pdf/1812.03407)] (**UNVP**) [166]
- Domain Generalization via Conditional Invariant Representations [[AAAI 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11682/11541)] (**CIDG**) [68]
- Domain Generalization with Adversarial Feature Learning [[CVPR 2018](https://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Domain_Generalization_With_CVPR_2018_paper.pdf)] [[Code](https://github.com/YuqiCui/MMD_AAE)] (**MMD-AAE**) [76]
- Deep Domain Generalization via Conditional Invariant Adversarial Networks [[ECCV 2018](http://openaccess.thecvf.com/content_ECCV_2018/papers/Ya_Li_Deep_Domain_Generalization_ECCV_2018_paper.pdf)] (**CIDDG, CDANN**) [77]
- Generalizing to Unseen Domains via Distribution Matching [[arXiv 2019](https://arxiv.53yu.com/pdf/1911.00804)] [[Code](https://github.com/belaalb/G2DM)] (**G2DM**) [81]
- Image Alignment in Unseen Domains via Domain Deep Generalization [[arXiv 2019](https://arxiv.org/pdf/1905.12028)] (**DeGIA**) [169]
- Multi-adversarial Discriminative Deep Domain Generalization for Face Presentation Attack Detection [[CVPR 2019](http://openaccess.thecvf.com/content_CVPR_2019/papers/Shao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)] [[Code](https://github.com/rshaojimmy/CVPR2019-MADDoG)] (**MADDG**) [78]
- Generalizable Feature Learning in the Presence of Data Bias and Domain Class Imbalance with Application to Skin Lesion Classification [[MICCAI 2019](https://www.cs.sfu.ca/~hamarneh/ecopy/miccai2019d.pdf)] [72]
- Domain Generalization via Model-Agnostic Learning of Semantic Features [[NeurIPS 2019](https://proceedings.neurips.cc/paper/2019/file/2974788b53f73e7950e8aa49f3a306db-Paper.pdf)] [[Code](https://github.com/biomedia-mira/masf)] (**MASF**) [18]
- Adversarial Invariant Feature Learning with Accuracy Constraint for Domain Generalization [[ECMLPKDD 2019](https://arxiv.53yu.com/pdf/1904.12543)] [[Code](https://github.com/akuzeee/AFLAC)] (**AFLAC**) [84]
- Feature Alignment and Restoration for Domain Generalization and Adaptation [[arXiv 2020](https://arxiv.org/pdf/2006.12009)] (**FAR**) [189]
- Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations [[arXiv 2020](https://arxiv.53yu.com/pdf/2006.11478)] (**RVR**) [82]
- Correlation-aware Adversarial Domain Adaptation and Generalization [[PR 2020](https://arxiv.53yu.com/pdf/1911.12983)] [[Code](https://github.com/mahfujur1/CA-DA-DG)] (**CAADA**) [80]
- Domain Generalization Using a Mixture of Multiple Latent Domains [[AAAI 2020](https://ojs.aaai.org/index.php/AAAI/article/view/6846/6700)] [[Code](https://github.com/mil-tokyo/dg_mmld)] [83]
- Single-Side Domain Generalization for Face Anti-Spoofing [[CVPR 2020](http://openaccess.thecvf.com/content_CVPR_2020/papers/Jia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)] [[Code](https://github.com/taylover-pei/SSDG-CVPR2020)] (**SSDG**) [79]
- Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI [[ISBI 2020](https://arxiv.53yu.com/pdf/1910.10035)] [85]
- Respecting Domain Relations: Hypothesis Invariance for Domain Generalization [[ICPR 2020](https://arxiv.53yu.com/pdf/2010.07591)] (**HIR**) [74]
- Domain Generalization via Multidomain Discriminant Analysis [[UAI 2020](http://proceedings.mlr.press/v115/hu20a/hu20a.pdf)] [[Code](https://github.com/amber0309/Multidomain-Discriminant-Analysis)] (**MDA**) [70]
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [[NeurIPS 2020](https://proceedings.neurips.cc/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Paper.pdf)] [[Code](https://github.com/wyf0912/LDDG)] (**LDDG**) [75]
- Domain Generalization via Entropy Regularization [[NeurIPS 2020](https://proceedings.neurips.cc/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-Paper.pdf)] [[Code](https://github.com/sshan-zhao/DG_via_ER)] [86]
- Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments [[arXiv 2021](https://arxiv.org/pdf/2106.09913)] [192]
- Semi-Supervised Domain Generalization in Real World: New Benchmark and Strong Baseline [[arXiv 2021](https://arxiv.org/pdf/2111.10221)] [179]
- Collaborative Semantic Aggregation and Calibration for Separated Domain Generalization [[arXiv 2021](https://arxiv.org/pdf/2110.06736)] [[Code](https://github.com/junkunyuan/CSAC)] (**CSAC**) [161]
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [[TIP 2021](https://ieeexplore.ieee.org/iel7/83/9263394/09311771.pdf)] (**MMFA-AAE**) [144]
- Scale Invariant Domain Generalization Image Recapture Detection [[ICONIP 2021](https://arxiv.org/pdf/2110.03496)] (**SADG**) [177]
- Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference [[IJCAI 2021](https://arxiv.org/pdf/2107.10931)] (**VBCLS**) [195]
- Domain Generalization using Causal Matching [[ICML 2021](http://proceedings.mlr.press/v139/mahajan21b/mahajan21b.pdf)] [[Code](https://github.com/microsoft/robustdg)] (**MatchDG**) [73]
- Generalization on Unseen Domains via Inference-Time Label-Preserving Target Projections [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Pandey_Generalization_on_Unseen_Domains_via_Inference-Time_Label-Preserving_Target_Projections_CVPR_2021_paper.pdf)] [[Code](https://github.com/yys-Polaris/InferenceTimeDG)] [118]
- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Li_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/lileicv/PDEN)] (**PDEN**) [141]
- Confidence Calibration for Domain Generalization under Covariate Shift [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Gong_Confidence_Calibration_for_Domain_Generalization_Under_Covariate_Shift_ICCV_2021_paper.pdf)] [133]
- On Calibration and Out-of-Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/118bd558033a1016fcc82560c65cca5f-Paper.pdf)] [154]
- Domain-invariant Feature Exploration for Domain Generalization [[TMLR 2022](https://arxiv.org/pdf/2207.12020)] [[Code](https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG)] (**DIFEX**) [209]
- Cross-Domain Ensemble Distillation for Domain Generalization [[ECCV 2022](https://arxiv.org/pdf/2211.14058)] (**XDED**) [94]
- Domain Generalisation via Risk Distribution Matching [[WACV 2024](https://arxiv.org/abs/2310.18598)] [[Code](https://github.com/nktoan/risk-distribution-matching)] (**RDM**) [234]

### Data Augmentation-Based Methods
> Data augmentation-based methods augment the original data and train the model on the generated data to improve model robustness.
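As a concrete illustration of this family (a sketch under our own naming, not the code of any specific paper below): mixup-style augmentation interpolates samples and labels drawn from different source domains, synthesizing intermediate domains at training time:

```python
import numpy as np

def cross_domain_mixup(x_a, y_a, x_b, y_b, alpha=0.3, rng=None):
    """Interpolate a batch from domain A with a batch from domain B.
    Labels are assumed one-hot so they can be mixed the same way."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix, lam

rng = np.random.default_rng(0)
x_photo = rng.normal(size=(4, 8))          # stand-ins for two domains' batches
x_sketch = rng.normal(size=(4, 8))
y_photo = np.eye(3)[[0, 1, 2, 0]]          # one-hot labels, 3 classes
y_sketch = np.eye(3)[[2, 2, 1, 0]]
x_mix, y_mix, lam = cross_domain_mixup(x_photo, y_photo, x_sketch, y_sketch, rng=rng)
```

Style-level variants such as MixStyle below apply the same interpolation to feature statistics (channel-wise mean and standard deviation) rather than to raw inputs.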
2018](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1804.10745)] [[代码](https:\u002F\u002Fgithub.com\u002Fvihari\u002Fcrossgrad)] (**CrossGrad**) [53]\n- 通过对抗数据增强实现对未见域的泛化 [[NeurIPS 2018](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2018\u002Ffile\u002F1d94108e907bb8311d8802b48fd54b4a-Paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fgeneralize-unseen-domains)] [25]\n- 在计算病理学中利用染色不变特征提升深度卷积神经网络的泛化能力 [[Frontiers in Bioengineering and Biotechnology 2019](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffbioe.2019.00198\u002Ffull?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Bioengineering_and_Biotechnology&id=474781)] [26]\n- 面向深度域泛化的多组件图像转换 [[WACV 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1812.08974)] [[代码](https:\u002F\u002Fgithub.com\u002Fmahfujur1\u002Fmit-DG)] [167]\n- 通过解拼图谜题实现域泛化 [[CVPR 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FCarlucci_Domain_Generalization_by_Solving_Jigsaw_Puzzles_CVPR_2019_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Ffmcarlucci\u002FJigenDG)] (**JiGen**) [98]\n- 应对模型在图像变换集合上的分布漂移脆弱性 [[ICCV 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FVolpi_Addressing_Model_Vulnerability_to_Distributional_Shifts_Over_Image_Transformation_Sets_ICCV_2019_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fricvolpi\u002Fdomain-shift-robustness)] [21]\n- 域随机化与金字塔一致性：无需目标域数据即可实现仿真到真实的泛化 [[ICCV 2019](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FYue_Domain_Randomization_and_Pyramid_Consistency_Simulation-to-Real_Generalization_Without_Accessing_Target_ICCV_2019_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fxyyue\u002FDRPC)] [62]\n- 幻觉式生成无关图像以实现跨域泛化 [[ICCV workshop 2019](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1808.01102)] 
[[代码](https:\u002F\u002Fgithub.com\u002Ffmcarlucci\u002FADAGE)] [63]\n- 利用Mixup训练改进无监督域适应 [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.00677)] [[代码*](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed)] (**Mixup**) [227]\n- 提升基于卷积神经网络的CMR图像分割模型的泛化能力 [[Frontiers in Cardiovascular Medicine 2020](https:\u002F\u002Fwww.frontiersin.org\u002Farticles\u002F10.3389\u002Ffcvm.2020.00105\u002Ffull)] [24]\n- 通过深度堆叠变换将医学图像分割的深度学习模型泛化到未见域 [[TMI 2020](https:\u002F\u002Fwww.ncbi.nlm.nih.gov\u002Fpmc\u002Farticles\u002Fpmc7393676\u002F)] (**BigAug**) [23]\n- 面向域泛化的深度域对抗图像生成 [[AAAI 2020](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F7003\u002F6857)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)] (**DDAIG**) [55]\n- 朝着面向深度人脸识别的通用表征学习迈进 [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FShi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FMatyushinMA\u002Funi_rep_deep_faces)] [22]\n- 基于域混合的异构域泛化 [[ICASSP 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.05448)] [[代码](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FMIXALL)] [128]\n- 学习生成新颖域以进行域泛化 [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304)] [[代码](https:\u002F\u002Fgithub.com\u002Fmousecpn\u002FL2A-OT)] (**L2A-OT**, **Digits-DG数据集**) [28]\n- 结合外在与内在监督进行域泛化 [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.09316)] [[代码](https:\u002F\u002Fgithub.com\u002Femma-sjwang\u002FEISNet)] (**EISNet**) [99]\n- 朝着识别未见域中的未见类别迈进 [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.12256.pdf?ref=https:\u002F\u002Fgithubhelp.com)] [[代码](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002FCuMix)] (**CuMix**) [57]\n- 重新思考域泛化的基准方法 [[ICPR 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2101.09060)]\n- 多则优：一种用于域泛化的新型多视角框架 [[arXiv 
2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.12329)] [184]\n- 基于随机风格匹配的半监督域泛化 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2106.00592)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fssdg-benchmark)] (**StyleMatch**) [54]\n- 更好的伪标签：面向半监督域泛化的联合域感知标签与双分类器 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)] [156]\n- 从单一源实现域外泛化：一种不确定性量化方法 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2108.02888)] [151]\n- 朝着面向域泛化的原则性解耦迈进 [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.13839)] [[代码](https:\u002F\u002Fgithub.com\u002Fhlzhang109\u002FDDG)] (**DDG**) [170]\n- MixStyle神经网络用于域泛化与域适应 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2107.02053)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [58]\n- VideoDG：将视频中的时序关系泛化到新域 [[TPAMI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.03716)] [[代码](https:\u002F\u002Fgithub.com\u002Fthuml\u002FVideoDG)] (**APN**) [197]\n- 基于边缘迁移学习的域泛化 [[JMLR 2021](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume22\u002F17-679\u002F17-679.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Faniketde\u002FDomainGeneralizationMarginal)] [188]\n- 域增强监督对比学习下的域泛化 [[AAAI学生摘要2021](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FSA-197.LeHS.pdf)] (**DASCL**) [139]\n- DecAug：通过分解特征表示和语义增强实现分布外泛化 [[AAAI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.09382)] [[代码](https:\u002F\u002Fgithub.com\u002FHaoyueBaiZJU\u002FDecAug)] (**DecAug**) [171]\n- 基于Mixstyle的域泛化 [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2104.02008)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [56]\n- 通过随机卷积实现稳健且可泛化的视觉表征学习 [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.13003)] [[代码](https:\u002F\u002Fgithub.com\u002Fwildphoton\u002FRandConv)] (**RC**) [59]\n- 学习如何学习单域泛化 [[CVPR 
2020](https://openaccess.thecvf.com/content_CVPR_2020/papers/Qiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https://github.com/joffery/M-ADA)] (**M-ADA**) [27]
- FSDR: Frequency Space Domain Randomization for Domain Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Huang_FSDR_Frequency_Space_Domain_Randomization_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/jxhuang0508/FSDR)] (**FSDR**) [115]
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https://github.com/liuquande/FedDG-ELCFS)] (**FedDG**) [147]
- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Qiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] [[Code](https://github.com/joffery/UMGUD)] [168]
- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]
- A Fourier-based Framework for Domain Generalization [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_A_Fourier-Based_Framework_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/MediaBrain-SJTU/FACT)] (**FACT**) [160]
- Open Domain Generalization with Domain-Augmented Meta-Learning [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Shu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[Code](https://github.com/thuml/OpenDG-DAML)] (**DAML**) [119]
- A Simple Feature Augmentation for Domain Generalization [[ICCV 
2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_A_Simple_Feature_Augmentation_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SFA**) [142]
- Universal Cross-Domain Retrieval: Generalizing Across Classes and Domains [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Paul_Universal_Cross-Domain_Retrieval_Generalizing_Across_Classes_and_Domains_ICCV_2021_paper.pdf)] [[Code](https://github.com/mvp18/UCDR)] (**SnMpNet**) [150]
- Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization [[MM 2021](https://dl.acm.org/doi/pdf/10.1145/3474085.3475271)] [137]
- Adversarial Teacher-Student Representation Learning for Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/a2137a2ae8e39b5002a3f8909ecb88fe-Paper.pdf)] [203]
- Model-Based Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/a8f12d9486cbcc2fe0cfc5352011ad35-Paper.pdf)] [[Code](https://github.com/arobey1/mbdg)] (**MBDG**) [200]
- Optimal Representations for Covariate Shift [[ICLR 2022](https://arxiv.org/pdf/2201.00057)] [[Code](https://github.com/ryoungj/optdom)] (**CAD**) [223]
- Label-Efficient Domain Generalization via Collaborative Exploration and Generalization [[MM 2022](https://arxiv.org/abs/2208.03644)] [[Code](https://github.com/junkunyuan/CEG)] (**CEG**) [211]
- Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization [[WACV 2025](https://arxiv.org/abs/2409.19774)] [[Code](https://github.com/NikosEfth/crafting-shifts)] [232]

### Meta-Learning-Based Methods
> Meta-learning-based methods train the model on a meta-train set and improve its performance on a meta-test set, so as to strengthen out-of-domain generalization.

- Learning to Generalize: Meta-Learning for Domain Generalization [[AAAI 2018](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16067/16547)] [[Code](https://github.com/HAHA-DL/MLDG)] (**MLDG**) [1]
- MetaReg: Towards Domain Generalization using Meta-Regularization [[NeurIPS 2018](https://proceedings.neurips.cc/paper/2018/file/647bba344396e7c8170902bcf2e15551-Paper.pdf)] 
[[Code*](https://github.com/elliotbeck/MetaReg_PyTorch)] (**MetaReg**) [4]
- Feature-Critic Networks for Heterogeneous Domain Generalization [[ICML 2019](http://proceedings.mlr.press/v97/li19l/li19l.pdf)] [[Code](https://github.com/liyiying/Feature_Critic)] (**Feature-Critic**) [5]
- Episodic Training for Domain Generalization [[ICCV 2019](https://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Episodic_Training_for_Domain_Generalization_ICCV_2019_paper.pdf)] [[Code](https://github.com/HAHA-DL/Episodic-DG)] (**Epi-FCR**) [7]
- Domain Generalization via Model-Agnostic Learning of Semantic Features [[NeurIPS 2019](https://proceedings.neurips.cc/paper/2019/file/2974788b53f73e7950e8aa49f3a306db-Paper.pdf)] [[Code](https://github.com/biomedia-mira/masf)] (**MASF**) [18]
- Domain Generalization via Semi-supervised Meta Learning [[arXiv 2020](https://arxiv.org/pdf/2009.12658)] [[Code](https://github.com/hosseinshn/DGSML)] (**DGSML**) [127]
- Frustratingly Simple Domain Generalization via Image Stylization [[arXiv 2020](https://arxiv.53yu.com/pdf/2006.11207)] [[Code](https://github.com/GT-RIPL/DomainGeneralization-Stylization)] [60]
- Domain Generalization for Named Entity Boundary Detection via Meta-Learning [[TNNLS 2020](https://ieeexplore.ieee.org/abstract/document/9174763/)] (**METABDRY**) [124]
- Learning to Learn Single Domain Generalization [[CVPR 2020](https://openaccess.thecvf.com/content_CVPR_2020/papers/Qiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https://github.com/joffery/M-ADA)] (**M-ADA**) [27]
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [[ECCV 2020](https://arxiv.org/pdf/2007.07645)] (**MetaVIB**) [15]
- Sequential Learning for Domain Generalization [[ECCV workshop 2020](https://arxiv.org/pdf/2004.01377)] (**S-MLDG**) [14]
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains [[MICCAI 2020](https://arxiv.org/pdf/2007.02035)] [[Code](https://github.com/liuquande/SAML)] (**SAML**) [17]
- More is Better: A Novel Multi-view Framework for Domain Generalization [[arXiv 
2021](https://arxiv.org/pdf/2112.12329)] [184]
- Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains [[ICIP 2021](https://arxiv.org/pdf/2112.13539)] (**x-EML**) [180]
- Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [[ICIP 2021](https://arxiv.org/pdf/2112.13538)] [185]
- MetaNorm: Learning to Normalize Few-Shot Batches Across Domains [[ICLR 2021](https://openreview.net/pdf?id=9z_dNsC4B5t)] [[Code](https://github.com/YDU-AI/MetaNorm)] (**MetaNorm**) [19]
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https://github.com/HeliosZhao/M3L)] (**M3L**) [12]
- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Qiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] [[Code](https://github.com/joffery/UMGUD)] [168]
- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]
- Meta Batch-Instance Normalization for Generalizable Person Re-Identification [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Choi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https://github.com/bismex/MetaBIN)] (**MetaBIN**) [13]
- Open Domain Generalization with Domain-Augmented Meta-Learning [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Shu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[Code](https://github.com/thuml/OpenDG-DAML)] (**DAML**) [119]
- On Challenges in Unsupervised Domain Generalization [[NeurIPS workshop 
2021](https://proceedings.mlr.press/v181/narayanan22a/narayanan22a.pdf)] [178]
- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/b0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https://github.com/manhhabui/mDSDI)] (**mDSDI**) [202]

### Ensemble-Learning-Based Methods
> Ensemble-learning-based methods mainly train a domain-specific model on each source domain and then draw on the collective wisdom to make accurate predictions.

- Exploiting Low-Rank Structure from Latent Domains for Domain Generalization [[ECCV 2014](https://linkspringer.53yu.com/content/pdf/10.1007/978-3-319-10578-9_41.pdf)] [87]
- Visual Recognition by Learning from Web Data: A Weakly Supervised Domain Generalization Approach [[CVPR 2015](https://openaccess.thecvf.com/content_cvpr_2015/papers/Niu_Visual_Recognition_by_2015_CVPR_paper.pdf)] [89]
- Multi-View Domain Generalization for Visual Recognition [[ICCV 2015](http://openaccess.thecvf.com/content_iccv_2015/papers/Niu_Multi-View_Domain_Generalization_ICCV_2015_paper.pdf)] (**MVDG**) [88]
- Deep Domain Generalization with Structured Low-Rank Constraint [[TIP 2017](https://par.nsf.gov/servlets/purl/10065328)] [91]
- Visual Recognition by Learning from Web Data via Weakly Supervised Domain Generalization [[TNNLS 2017](https://bcmi.sjtu.edu.cn/home/niuli/paper/Visual%20Recognition%20by%20Learning%20From%20Web%20Data%20via%20Weakly%20Supervised%20Domain%20Generalization.pdf)] [121]
- Robust Place Categorization with Deep Domain Generalization [[IEEE Robotics and Automation Letters 2018](https://arxiv.53yu.com/pdf/1805.12048)] [[Code](https://github.com/mancinimassimiliano/caffe)] (**COLD**) [97]
- A Multi-View Domain Generalization Framework for Visual Recognition [[TNNLS 2018](http://openaccess.thecvf.com/content_iccv_2015/papers/Niu_Multi-View_Domain_Generalization_ICCV_2015_paper.pdf)] [122]
- Domain Generalization with Domain-Specific Aggregation Modules [[GCPR 2018](https://arxiv.53yu.com/pdf/1809.10966)] (**D-SAMs**) [92]
- Best Sources Forward: Domain Generalization through Source-Specific Nets [[ICIP 2018](https://arxiv.53yu.com/pdf/1806.05810)] [90]
- Batch Normalization Embeddings for Deep Domain Generalization [[arXiv 2020](https://arxiv.53yu.com/pdf/2011.12672)] (**BNE**) [96]
- 
DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [[TMI 2020](https://arxiv.53yu.com/pdf/2010.06208)] (**DoFE**) [93]
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [[TMI 2020](https://arxiv.53yu.com/pdf/2002.03366)] [[Code](https://github.com/liuquande/MS-Net)] (**MS-Net**) [95]
- Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition [[ICLR 2020](https://openreview.net/pdf?id=H1lxVyStPH)] (**GCFN**) [126]
- Learning to Optimize Domain Specific Normalization for Domain Generalization [[ECCV 2020](https://arxiv.53yu.com/pdf/1907.04275)] (**DSON**) [94]
- Class-conditioned Domain Generalization via Wasserstein Distributional Robust Optimization [[ICLR workshop 2021](https://arxiv.org/pdf/2109.03676)] [175]
- Domain and Content Adaptive Convolution for Domain Generalization in Medical Image Segmentation [[arXiv 2021](https://arxiv.org/pdf/2109.05676)] (**DCAC**) [176]
- Dynamically Decoding Source Domain Knowledge for Unseen Domain Generalization [[arXiv 2021](https://www.researchgate.net/profile/Karthik-Nandakumar-3/publication/355142270_Dynamically_Decoding_Source_Domain_Knowledge_For_Unseen_Domain_Generalization/links/61debe18034dda1b9ef16fc6/Dynamically-Decoding-Source-Domain-Knowledge-For-Unseen-Domain-Generalization.pdf)] (**D2SDK**) [174]
- Domain Adaptive Ensemble Learning [[TIP 2021](https://arxiv.53yu.com/pdf/2003.07325)] [[Code](https://github.com/KaiyangZhou/Dassl.pytorch)] (**mini-DomainNet dataset**) [34]
- Generalizable Person Re-identification with Relevance-aware Mixture of Experts [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Dai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.pdf)] (**RaMoE**) [187]
- Learning Transferable and Interpretable Representations for Domain Generalization [[MM 2021](https://dl.acm.org/doi/pdf/10.1145/3474085.3475488)] (**DTN**) [131]
- Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation [[MM 2021](https://arxiv.53yu.com/pdf/2110.04820)] (**KDDG**) [157]
- TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/0f49c89d1e7298bb9930789c8ed59d48-Paper.pdf)] 
[[Code](https://github.com/ShengcaiLiao/QAConv)] (**TransMatcher**) [208]
- Cross-Domain Ensemble Distillation for Domain Generalization [[ECCV 2022](https://arxiv.org/pdf/2211.14058)] (**XDED**) [94]

### Self-Supervised-Learning-Based Methods
> Self-supervised-learning-based methods improve model generalization by solving pretext tasks with the data itself.

- Domain Generalization for Object Recognition with Multi-task Autoencoders [[ICCV 2015](http://openaccess.thecvf.com/content_iccv_2015/papers/Ghifary_Domain_Generalization_for_ICCV_2015_paper.pdf)] [[Code](https://github.com/Emma0118/mate)] (**MTAE**, **Rotated MNIST dataset**) [6]
- Domain Generalization by Solving Jigsaw Puzzles [[CVPR 2019](https://openaccess.thecvf.com/content_CVPR_2019/papers/Carlucci_Domain_Generalization_by_Solving_Jigsaw_Puzzles_CVPR_2019_paper.pdf)] [[Code](https://github.com/fmcarlucci/JigenDG)] (**JiGen**) [98]
- Improving Out-of-Distribution Generalization via Multi-Task Self-Supervised Pretraining [[arXiv 2020](https://arxiv.53yu.com/pdf/2003.13525)] [102]
- Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition [[ICLR 2020](https://openreview.net/pdf?id=H1lxVyStPH)] (**GCFN**) [126]
- Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [[ECCV 2020](https://arxiv.53yu.com/pdf/2007.09316)] [[Code](https://github.com/emma-sjwang/EISNet)] (**EISNet**) [99]
- Zero Shot Domain Generalization [[BMVC 2020](https://arxiv.53yu.com/pdf/2008.07443)] [[Code](https://github.com/aniketde/ZeroShotDG)] [100]
- Out-of-domain Generalization from a Single Source: An Uncertainty Quantification Approach [[arXiv 2021](https://arxiv.53yu.com/pdf/2108.02888)] [151]
- Self-Supervised Learning Across Domains [[TPAMI 2021](https://arxiv.53yu.com/pdf/2007.12368)] [[Code](https://github.com/silvia1993/Self-Supervised_Learning_Across_Domains)] [101]
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [[TIP 2021](https://ieeexplore.ieee.org/iel7/83/9263394/09311771.pdf)] (**MMFA-AAE**) [144]
- Scale Invariant Domain Generalization Image Recapture Detection [[ICONIP 2021](https://arxiv.org/pdf/2110.03496)] (**SADG**) [177]
- Domain Generalization with Domain Augmented Supervised Contrastive Learning [[AAAI Student Abstract 2021](https://www.aaai.org/AAAI21Papers/SA-197.LeHS.pdf)] (**DASCL**) [139]
- 
Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Li_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/lileicv/PDEN)] (**PDEN**) [141]
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Liu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[Code](https://github.com/liuquande/FedDG-ELCFS)] (**FedDG**) [147]
- Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Liang_Boosting_the_Generalization_Capability_in_Cross-Domain_Few-Shot_Learning_via_Noise-Enhanced_ICCV_2021_paper.pdf)] (**NSAE**) [194]
- A Style and Semantic Memory Mechanism for Domain Generalization [[ICCV 2021](http://openaccess.thecvf.com/content/ICCV2021/papers/Chen_A_Style_and_Semantic_Memory_Mechanism_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**STEAM**) [130]
- SelfReg: Self-supervised Contrastive Regularization for Domain Generalization [[ICCV 2021](http://openaccess.thecvf.com/content/ICCV2021/papers/Kim_SelfReg_Self-Supervised_Contrastive_Regularization_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SelfReg**) [138]
- Domain Generalization for Mammography Detection via Multi-style and Multi-view Contrastive Learning [[MICCAI 2021](https://arxiv.org/pdf/2111.10827)] [[Code](https://github.com/lizheren/MSVCL_MICCAI2021)] (**MSVCL**) [172]
- Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization [[MM 2021](https://dl.acm.org/doi/pdf/10.1145/3474085.3475271)] [137]
- Adversarial Teacher-Student Representation Learning for Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/a2137a2ae8e39b5002a3f8909ecb88fe-Paper.pdf)] [203]
- Domain Generalization via Contrastive Causal Learning [[arXiv 2022](https://arxiv.org/abs/2210.02655)] (**CCM**) [212]
- Towards Unsupervised Domain Generalization [[CVPR 
2022](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Towards_Unsupervised_Domain_Generalization_CVPR_2022_paper.pdf)] (**DARLING**) [69]
- Unsupervised Domain Generalization by Learning a Bridge Across Domains [[CVPR 2022](https://openaccess.thecvf.com/content/CVPR2022/papers/Harary_Unsupervised_Domain_Generalization_by_Learning_a_Bridge_Across_Domains_CVPR_2022_paper.pdf)] [[Code](https://github.com/leokarlin/BrAD)] (**BrAD**) [182]

### Disentangled-Representation-Learning-Based Methods
> Disentangled-representation-learning-based methods aim to disentangle domain-specific and domain-invariant parts from source data, and then adopt the domain-invariant part for inference on the target domains.

- Undoing the Damage of Dataset Bias [[ECCV 2012](https://linkspringer.53yu.com/content/pdf/10.1007/978-3-642-33718-5_12.pdf)] [[Code](https://github.com/adikhosla/undoing-bias)] [103]
- Deeper, Broader and Artier Domain Generalization [[ICCV 2017](https://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Deeper_Broader_and_ICCV_2017_paper.pdf)] [[Code](https://dali-dl.github.io/project_iccv2017.html)] [2]
- DIVA: Domain Invariant Variational Autoencoders [[ICML workshop 2019](http://proceedings.mlr.press/v121/ilse20a/ilse20a.pdf)] [[Code](https://github.com/AMLab-Amsterdam/DIVA)] (**DIVA**) [107]
- Efficient Domain Generalization via Common-Specific Low-Rank Decomposition [[ICML 2020](http://proceedings.mlr.press/v119/piratla20a/piratla20a.pdf)] [[Code](https://github.com/vihari/CSD)] (**CSD**) [105]
- Cross-Domain Face Presentation Attack Detection via Multi-Domain Disentangled Representation Learning [[CVPR 2020](http://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Cross-Domain_Face_Presentation_Attack_Detection_via_Multi-Domain_Disentangled_Representation_Learning_CVPR_2020_paper.pdf)] [106]
- Learning to Balance Specificity and Invariance for In and Out of Domain Generalization [[ECCV 2020](https://arxiv.53yu.com/pdf/2008.12839)] [[Code](https://github.com/prithv1/DMG)] (**DMG**) [104]
- Towards Principled Disentanglement for Domain Generalization [[arXiv 2021](https://arxiv.org/pdf/2111.13839)] [[Code](https://github.com/hlzhang109/DDG)] (**DDG**) [170]
- 
Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [[ICIP 2021](https://arxiv.org/pdf/2112.13538)] [185]
- DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation [[AAAI 2021](https://arxiv.org/pdf/2012.09382)] [[Code](https://github.com/HaoyueBaiZJU/DecAug)] (**DecAug**) [171]
- RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Choi_RobustNet_Improving_Domain_Generalization_in_Urban-Scene_Segmentation_via_Instance_Selective_CVPR_2021_paper.pdf)] [[Code](https://github.com/shachoi/RobustNet)] (**RobustNet**) [193]
- Reducing Domain Gap by Reducing Style Bias [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Nam_Reducing_Domain_Gap_by_Reducing_Style_Bias_CVPR_2021_paper.pdf)] [[Code](https://github.com/hyeonseobnam/sagnet)] (**SagNet**) [230]
- Shape-Biased Domain Generalization via Shock Graph Embeddings [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Narayanan_Shape-Biased_Domain_Generalization_via_Shock_Graph_Embeddings_ICCV_2021_paper.pdf)] [149]
- Domain-Invariant Disentangled Network for Generalizable Object Detection [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Domain-Invariant_Disentangled_Network_for_Generalizable_Object_Detection_ICCV_2021_paper.pdf)] [143]
- Domain Generalization via Feature Variation Decorrelation [[MM 2021](https://dl.acm.org/doi/pdf/10.1145/3474085.3475311)] [146]
- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/b0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https://github.com/manhhabui/mDSDI)] (**mDSDI**) [202]
- Variational Disentanglement for Domain Generalization [[TMLR 2022](https://arxiv.org/pdf/2109.05826)] (**VDN**) [210]
- Intra-Source Style Augmentation for Improved Domain Generalization [[WACV 2023](https://arxiv.org/pdf/2210.10175.pdf)] (**ISSA**) [215]

### Regularization-Based Methods
> Regularization-based methods leverage regularization terms to prevent overfitting, or design optimization strategies to guide the training.

- Generalizing from Several Related Classification Tasks to a New Unlabeled Sample [[NeurIPS 
2011](https://proceedings.neurips.cc/paper/2011/file/b571ecea16a9824023ee1af16897a582-Paper.pdf)] [113]
- MetaReg: Towards Domain Generalization using Meta-Regularization [[NeurIPS 2018](https://proceedings.neurips.cc/paper/2018/file/647bba344396e7c8170902bcf2e15551-Paper.pdf)] [[Code*](https://github.com/elliotbeck/MetaReg_PyTorch)] (**MetaReg**) [4]
- Invariant Risk Minimization [[arXiv 2019](https://arxiv.53yu.com/pdf/1907.02893.pdf;)] [[Code](https://github.com/facebookresearch/InvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]
- Learning Robust Representations by Projecting Superficial Statistics Out [[ICLR 2019](https://arxiv.53yu.com/pdf/1903.06256)] [[Code](https://github.com/HaohanWang/HEX)] (**HEX**, **ImageNet-Sketch dataset**) [35]
- Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization [[ICLR 2020](https://arxiv.org/pdf/1911.08731)] [[Code](https://github.com/kohpangwei/group_DRO)] (**GroupDRO**) [218]
- Self-Challenging Improves Cross-Domain Generalization [[ECCV 2020](https://arxiv.53yu.com/pdf/2007.02454)] [[Code](https://github.com/DeLightCMU/RSC)] (**RSC**) [64]
- Energy-based Out-of-distribution Detection [[NeurIPS 2020](https://proceedings.neurips.cc/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Paper.pdf)] [[Code](https://github.com/xieshuqin/Energy-OOD)] [181]
- When Can We Formulate the Out-of-Distribution Generalization Problem as an Invariance Problem? [[arXiv 2021](https://openreview.net/pdf?id=FzGiUKN4aBp)] [[Code*](https://github.com/facebookresearch/DomainBed)] (**IGA**) [219]
- Learning Representations that Support Robust Transfer of Predictors [[arXiv 2021](https://arxiv.org/pdf/2110.09940)] [[Code](https://github.com/Newbeeer/TRM)] (**TRM**) [220]
- SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization [[arXiv 2021](https://arxiv.org/pdf/2106.02266)] [[Code*](https://github.com/facebookresearch/DomainBed)] (**SANDMask**) [222]
- Out-of-Distribution Generalization via Risk Extrapolation [[ICML 
2021](http://proceedings.mlr.press/v139/krueger21a/krueger21a.pdf)] (**VREx**) [190]
- Learning Explanations that are Hard to Vary [[ICLR 2021](https://arxiv.org/pdf/2009.00329)] [[Code*](https://github.com/facebookresearch/DomainBed)] (**ANDMask**) [221]
- A Fourier-based Framework for Domain Generalization [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_A_Fourier-Based_Framework_for_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/MediaBrain-SJTU/FACT)] (**FACT**) [160]
- Domain Generalization via Gradient Surgery [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Mansilla_Domain_Generalization_via_Gradient_Surgery_ICCV_2021_paper.pdf)] [[Code](https://github.com/lucasmansilla/DGvGS)] (**Agr**) [148]
- SelfReg: Self-supervised Contrastive Regularization for Domain Generalization [[ICCV 2021](http://openaccess.thecvf.com/content/ICCV2021/papers/Kim_SelfReg_Self-Supervised_Contrastive_Regularization_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**SelfReg**) [138]
- Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation [[MM 2021](https://arxiv.53yu.com/pdf/2110.04820)] (**KDDG**) [157]
- Model-Based Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/a8f12d9486cbcc2fe0cfc5352011ad35-Paper.pdf)] [[Code](https://github.com/arobey1/mbdg)] (**MBDG**) [200]
- SWAD: Domain Generalization by Seeking Flat Minima [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/bcb41ccdc4363c6848a1d760f26c28a0-Paper.pdf)] [[Code](https://github.com/khanrc/swad)] (**SWAD**) [201]
- Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/a02ef8389f6d40f84b50504613117f88-Paper.pdf)] [[Code](https://github.com/anshuln/Training-for-the-Future)] (**GI**) [204]
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [[NeurIPS 
2021](https://proceedings.neurips.cc/paper/2021/file/c705112d1ec18b97acac7e2d63973424-Paper.pdf)] [[Code](https://github.com/henrikmarklund/arm)] (**ARM**) [228]
- Gradient Starvation: A Learning Proclivity in Neural Networks [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/0987b8b338d6c90bbedd8631bc499221-Paper.pdf)] [[Code*](https://github.com/facebookresearch/DomainBed)] (**SD**) [225]
- Quantifying and Improving Transferability in Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/5adaacd4531b78ff8b5cedfe3f4d5212-Paper.pdf)] [[Code](https://github.com/Gordon-Guojun-Zhang/Transferability-NeurIPS2021)] [206]
- Gradient Matching for Domain Generalization [[ICLR 2022](https://arxiv.org/pdf/2104.09937)] [[Code](https://github.com/YugeTen/fish)] (**Fish**) [224]
- Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization [[ICML 2022](https://arxiv.org/pdf/2109.02934)] [[Code](https://github.com/alexrame/fishr)] (**Fishr**) [173]
- Global-Local Regularization via Distributional Robustness [[AISTATS 2023](https://arxiv.org/abs/2203.00553)] [[Code](https://github.com/VietHoang1512/GLOT)] (**GLOT**) [231]
- Domain Generalization via Risk Distribution Matching [[WACV 2024](https://arxiv.org/abs/2310.18598)] [[Code](https://github.com/nktoan/risk-distribution-matching)] (**RDM**) [234]

### Normalization-Based Methods
> Normalization-based methods calibrate data from different domains by normalizing them with their own statistics.

- Deep CORAL: Correlation Alignment for Deep Domain Adaptation [[ECCV 2016](https://arxiv.org/pdf/1607.01719)] [[Code](https://github.com/facebookresearch/DomainBed)] (**CORAL**) [229]
- Batch Normalization Embeddings for Deep Domain Generalization [[arXiv 2020](https://arxiv.53yu.com/pdf/2011.12672)] (**BNE**) [96]
- Learning to Optimize Domain Specific Normalization for Domain Generalization [[ECCV 2020](https://arxiv.53yu.com/pdf/1907.04275)] (**DSON**) [94]
- MetaNorm: Learning to Normalize Few-Shot Batches Across Domains [[ICLR 2021](https://openreview.net/pdf?id=9z_dNsC4B5t)] 
[[Code](https://github.com/YDU-AI/MetaNorm)] (**MetaNorm**) [19]
- Meta Batch-Instance Normalization for Generalizable Person Re-Identification [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Choi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[Code](https://github.com/bismex/MetaBIN)] (**MetaBIN**) [13]
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_CVPR_2021_paper.pdf)] [[Code](https://github.com/HeliosZhao/M3L)] (**M3L**) [12]
- Adversarially Adaptive Normalization for Single Domain Generalization [[CVPR 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Fan_Adversarially_Adaptive_Normalization_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] (**ASR**) [116]
- Collaborative Optimization and Aggregation for Decentralized Domain Generalization and Adaptation [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Collaborative_Optimization_and_Aggregation_for_Decentralized_Domain_Generalization_and_Adaptation_ICCV_2021_paper.pdf)] (**COPDA**) [159]
- Domain Generalization through Audio-Visual Relative Norm Alignment in First Person Action Recognition [[WACV 2022](https://openaccess.thecvf.com/content/WACV2022/papers/Planamente_Domain_Generalization_Through_Audio-Visual_Relative_Norm_Alignment_in_First_Person_WACV_2022_paper.pdf)] (**RNA-Net**) [186]

### Information-Theory-Based Methods
> Information-theory-based methods utilize techniques of information theory to achieve domain generalization.

- Learning to Learn with Variational Information Bottleneck for Domain Generalization [[ECCV 2020](https://arxiv.org/pdf/2007.07645)] (**MetaVIB**) [15]
- Progressive Domain Expansion Network for Single Domain Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Li_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/lileicv/PDEN)] (**PDEN**) [141]
- Learning to Diversify for Single Domain Generalization [[ICCV 
2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Learning_To_Diversify_for_Single_Domain_Generalization_ICCV_2021_paper.pdf)] [[Code](https://github.com/BUserName/Learning)] [158]
- Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf)] [[Code](https://github.com/ahujak/IB-IRM)] (**IB-IRM**) [207]
- Exploiting Domain-Specific Features to Enhance Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/b0f2ad44d26e1a6f244201fe0fd864d1-Paper.pdf)] [[Code](https://github.com/manhhabui/mDSDI)] (**mDSDI**) [202]
- Invariant Information Bottleneck for Domain Generalization [[AAAI 2022](https://arxiv.org/pdf/2106.06333)] [[Code](https://github.com/Luodian/IIB/tree/IIB)] (**IIB**) [140]

### Causality-Based Methods
> Causality-based methods analyze and address the domain generalization problem from a causal perspective.

- Invariant Risk Minimization [[arXiv 2019](https://arxiv.53yu.com/pdf/1907.02893.pdf;)] [[Code](https://github.com/facebookresearch/InvariantRiskMinimization)] (**IRM**, **Colored MNIST dataset**) [165]
- Learning Domain-Invariant Relationship with Instrumental Variable for Domain Generalization [[arXiv 2021](https://arxiv.org/pdf/2110.01438)] (**IV-DG**) [163]
- A Causal Framework for Distribution Generalization [[TPAMI 2021](https://arxiv.org/pdf/2006.07433)] [[Code](https://runesen.github.io/NILE/)] (**NILE**) [191]
- Domain Generalization using Causal Matching [[ICML 2021](http://proceedings.mlr.press/v139/mahajan21b/mahajan21b.pdf)] [[Code](https://github.com/microsoft/robustdg)] (**MatchDG**) [73]
- Deep Stable Learning for Out-of-Distribution Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Deep_Stable_Learning_for_Out-of-Distribution_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/xxgege/StableNet)] (**StableNet**) [117]
- Out-of-Distribution Generalization via Risk Extrapolation [[ICML 
2021](http://proceedings.mlr.press/v139/krueger21a/krueger21a.pdf)] [[Code](https://github.com/facebookresearch/DomainBed)] (**VREx**) [217]
- A Style and Semantic Memory Mechanism for Domain Generalization [[ICCV 2021](http://openaccess.thecvf.com/content/ICCV2021/papers/Chen_A_Style_and_Semantic_Memory_Mechanism_for_Domain_Generalization_ICCV_2021_paper.pdf)] (**STEAM**) [130]
- Learning Causal Semantic Representation for Out-of-Distribution Prediction [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/310614fca8fb8e5491295336298c340f-Paper.pdf)] [[Code](https://github.com/changliu00/causal-semantic-generative-model)] (**CSG-ind**) [145]
- Recovering Latent Causal Factor for Generalization to Distributional Shifts [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/8c6744c9d42ec2cb9e8885b54ff744d0-Paper.pdf)] [[Code](https://github.com/wubotong/LaCIM)] (**LaCIM**) [152]
- On Calibration and Out-of-domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/118bd558033a1016fcc82560c65cca5f-Paper.pdf)]
- Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf)] [[Code](https://github.com/ahujak/IB-IRM)] (**IB-ERM**, **IB-IRM**) [207]
- Domain Generalization via Contrastive Causal Learning [[arXiv 2022](https://arxiv.org/abs/2210.02655)] (**CCM**) [212]
- Invariant Causal Mechanisms through Distribution Matching [[arXiv 2022](https://arxiv.org/pdf/2206.11646)] [[Code*](https://github.com/facebookresearch/DomainBed)] (**CausIRL-CORAL**, **CausIRL-MMD**) [216]
- Invariant Information Bottleneck for Domain Generalization [[AAAI 2022](https://arxiv.org/pdf/2106.06333)] [[Code](https://github.com/Luodian/IIB/tree/IIB)] (**IIB**) [140]
- Causal Inference via Style Transfer for Out-of-distribution Generalisation [[KDD 2023](https://arxiv.org/abs/2212.03063)] 
[[Code](https://github.com/nktoan/Causal-Inference-via-Style-Transfer-for-OOD-Generalisation)] (**FAST, FAFT, FAGT**) [233]

### Inference-Time-Based Methods
> Inference-time-based methods leverage unlabeled target data, available at inference time, to improve generalization performance without further model training.

- Generalization on Unseen Domains via Inference-Time Label-Preserving Target Projections [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Pandey_Generalization_on_Unseen_Domains_via_Inference-Time_Label-Preserving_Target_Projections_CVPR_2021_paper.pdf)] [[Code](https://github.com/yys-Polaris/InferenceTimeDG)] [118]
- Adaptive Methods for Real-World Domain Generalization [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Dubey_Adaptive_Methods_for_Real-World_Domain_Generalization_CVPR_2021_paper.pdf)] [[Code](https://github.com/abhimanyudubey/GeoYFCC)] (**DA-ERM**) [132]
- Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization [[NeurIPS 2021](https://proceedings.neurips.cc/paper/2021/file/1415fe9fea0fa1e45dddcff5682239a0-Paper.pdf)] [[Code](https://github.com/matsuolab/T3A)] (**T3A**) [136]

### Neural-Architecture-Search-Based Methods
> Neural-architecture-search-based methods aim to dynamically tune the network architecture to improve out-of-distribution generalization.

- NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization [[ICCV 2021](https://openaccess.thecvf.com/content/ICCV2021/papers/Bai_NAS-OoD_Neural_Architecture_Search_for_Out-of-Distribution_Generalization_ICCV_2021_paper.pdf)] (**NAS-OoD**) [129]

## Single Domain Generalization
> Single domain generalization aims to improve model performance on unknown target domains using the data of only one source domain.

- Learning to Learn Single Domain Generalization [[CVPR 2020](https://openaccess.thecvf.com/content_CVPR_2020/papers/Qiao_Learning_to_Learn_Single_Domain_Generalization_CVPR_2020_paper.pdf)] [[Code](https://github.com/joffery/M-ADA)] (**M-ADA**) [27]
- Out-of-domain Generalization from a Single Source: An Uncertainty Quantification Approach [[arXiv 2021](https://arxiv.53yu.com/pdf/2108.02888)] [151]
- Uncertainty-guided Model Generalization to Unseen Domains [[CVPR 2021](http://openaccess.thecvf.com/content/CVPR2021/papers/Qiao_Uncertainty-Guided_Model_Generalization_to_Unseen_Domains_CVPR_2021_paper.pdf)] 
[[代码](https:\u002F\u002Fgithub.com\u002Fjoffery\u002FUMGUD)] [168]\n- 用于单一领域泛化的对抗自适应归一化 [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FFan_Adversarially_Adaptive_Normalization_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)]  (**ASR**) [116]\n- 用于单一领域泛化的渐进式领域扩展网络 [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_Progressive_Domain_Expansion_Network_for_Single_Domain_Generalization_CVPR_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Flileicv\u002FPDEN)] (**PDEN**) [141]\n- 为单一领域泛化而学习多样化 [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWang_Learning_To_Diversify_for_Single_Domain_Generalization_ICCV_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FBUserName\u002FLearning)] [158]\n- 来源内风格增强以提升领域泛化 [[WACV 2023](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.10175.pdf)] (**ISSA**) [215]\n- 在单一源域泛化中为验证和训练设计分布偏移 [[WACV 2025](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19774)] [[代码](https:\u002F\u002Fgithub.com\u002FNikosEfth\u002Fcrafting-shifts)] [232]\n\n## 半监督\u002F弱监督\u002F无监督领域泛化\n> 半监督\u002F弱监督领域泛化假设源数据中有一部分是未标记的，而无监督领域泛化则假设没有任何训练监督。\n\n- 通过学习网络数据进行视觉识别：一种弱监督领域泛化方法 [[CVPR 2015](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2015\u002Fpapers\u002FNiu_Visual_Recognition_by_2015_CVPR_paper.pdf)] [89]\n- 基于弱监督领域泛化的网络数据学习进行视觉识别 [[TNNLS 2017](https:\u002F\u002Fbcmi.sjtu.edu.cn\u002Fhome\u002Fniuli\u002Fpaper\u002FVisual%20Recognition%20by%20Learning%20From%20Web%20Data%20via%20Weakly%20Supervised%20Domain%20Generalization.pdf)] [121]\n- 基于半监督元学习的领域泛化 [[arXiv 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.12658)] [[代码](https:\u002F\u002Fgithub.com\u002Fhosseinshn\u002FDGSML)] (**DGSML**) [127]\n- 用于变转速下旋转机械故障诊断的深度半监督领域泛化网络 [[IEEE Transactions on Instrumentation and Measurement 
2020](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FYixiao-Liao\u002Fpublication\u002F341199775_Deep_Semisupervised_Domain_Generalization_Network_for_Rotary_Machinery_Fault_Diagnosis_Under_Variable_Speed\u002Flinks\u002F613f088201846e45ef450a0a\u002FDeep-Semisupervised-Domain-Generalization-Network-for-Rotary-Machinery-Fault-Diagnosis-Under-Variable-Speed.pdf)] (**DSDGN**) [125]\n- 基于随机 StyleMatch 的半监督领域泛化 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2106.00592)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fssdg-benchmark)] (**StyleMatch**) [54]\n- 更好的伪标签：用于半监督领域泛化的联合域感知标签与双分类器 [[arXiv 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2110.04820)] [156]\n- 现实世界中的半监督领域泛化：新基准与强基线 [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.10221)] [179]\n- 关于无监督领域泛化的挑战 [[NeurIPS workshop 2021](https:\u002F\u002Fproceedings.mlr.press\u002Fv181\u002Fnarayanan22a\u002Fnarayanan22a.pdf)] [178]\n- 针对单个标注领域的特定领域偏差过滤 [[IJCV 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.00726)] [[代码](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FDSBF)] (**DSBF**) [162]\n- 朝着无监督领域泛化迈进 [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhang_Towards_Unsupervised_Domain_Generalization_CVPR_2022_paper.pdf)] (**DARLING**) [69]\n- 通过学习跨域桥梁实现无监督领域泛化 [[CVPR 2022](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FHarary_Unsupervised_Domain_Generalization_by_Learning_a_Bridge_Across_Domains_CVPR_2022_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fleokarlin\u002FBrAD)] (**BrAD**) [182]\n- 基于协作式探索与泛化的低标签效率领域泛化 [[MM 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.03644)] [[代码](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCEG)] (**CEG**) [211]\n\n## 开放\u002F异构领域泛化\n> 开放\u002F异构领域泛化假设一个领域的标签空间与另一个领域的标签空间不同。\n\n- 用于异构领域泛化的特征批评者网络 [[ICML 2019](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fli19l\u002Fli19l.pdf)] 
[[代码](https:\u002F\u002Fgithub.com\u002Fliyiying\u002FFeature_Critic)] (**Feature-Critic**) [5]\n- 用于领域泛化的情景式（episodic）训练 [[ICCV 2019](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLi_Episodic_Training_for_Domain_Generalization_ICCV_2019_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FHAHA-DL\u002FEpisodic-DG)] (**Epi-FCR**) [7]\n- 朝着在未知领域中识别未见类别迈进 [[ECCV 2020](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2007.12256.pdf?ref=https:\u002F\u002Fgithubhelp.com)] [[代码](https:\u002F\u002Fgithub.com\u002Fmancinimassimiliano\u002FCuMix)] (**CuMix**) [57]\n- 基于领域混合的异构领域泛化 [[ICASSP 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.05448)] [[代码](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FMIXALL)] [128]\n- 基于域增强元学习的开放领域泛化 [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FShu_Open_Domain_Generalization_with_Domain-Augmented_Meta-Learning_CVPR_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fthuml\u002FOpenDG-DAML)] (**DAML**) [119]\n- 通用跨域检索：跨类别与跨领域的泛化 [[ICCV 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FPaul_Universal_Cross-Domain_Retrieval_Generalizing_Across_Classes_and_Domains_ICCV_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fmvp18\u002FUCDR)] (**SnMpNet**) [150]\n\n## 联邦领域泛化\n> 联邦领域泛化假设源数据是分布式的，出于数据隐私保护的原因无法融合。\n\n- 分离领域泛化的协同语义聚合与校准 [[arXiv 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.06736)] [[代码](https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FCSAC)] (**CSAC**) [161]\n- FedDG：基于连续频率空间中情景式学习的医学图像分割联邦领域泛化 [[CVPR 2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_FedDG_Federated_Domain_Generalization_on_Medical_Image_Segmentation_via_Episodic_CVPR_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fliuquande\u002FFedDG-ELCFS)] (**FedDG**) [147]\n- 用于去中心化领域泛化与适应的协同优化与聚合 [[ICCV 
2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FWu_Collaborative_Optimization_and_Aggregation_for_Decentralized_Domain_Generalization_and_Adaptation_ICCV_2021_paper.pdf)] (**COPDA**) [159]\n\n## 无源领域泛化\n> 无源领域泛化旨在提高模型对任意未见领域的泛化能力，而不使用任何源域数据。\n\n- PromptStyler：基于提示驱动的风格生成用于无源领域泛化 [[ICCV 2023](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15199)] [[项目页面](https:\u002F\u002FPromptStyler.github.io\u002F)] (**PromptStyler**) [231]\n\n## 应用\n\n### 人员重识别\n- 面向领域泛化的深度域对抗图像生成 [[AAAI 2020](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F7003\u002F6857)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)]\n- 学习生成新领域以实现领域泛化 [[ECCV 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304)] [[代码](https:\u002F\u002Fgithub.com\u002Fmousecpn\u002FL2A-OT)] (**L2A-OT**, **Digits-DG数据集**) [28]\n- 学习可泛化的全尺度表征用于人员重识别 [[TPAMI 2021](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.06827)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fdeep-person-reid)] [114]\n- 用于人员重识别的多领域对抗特征泛化 [[TIP 2021](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F83\u002F9263394\u002F09311771.pdf)] (**MMFA-AAE**) [144]\n- 基于Mixstyle的领域泛化 [[ICLR 2021](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F2104.02008)] [[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002Fmixstyle-release)] (**MixStyle**) [56]\n- 通过基于记忆的多源元学习来泛化未见领域的人员重识别 [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhao_Learning_to_Generalize_Unseen_Domains_via_Memory-based_Multi-Source_Meta-Learning_for_CVPR_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FHeliosZhao\u002FM3L)] (**M3L**) [12]\n- 用于可泛化人员重识别的元批归一化 [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChoi_Meta_Batch-Instance_Normalization_for_Generalizable_Person_Re-Identification_CVPR_2021_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fbismex\u002FMetaBIN)] 
(**MetaBIN**) [13]\n- 基于相关性感知专家混合的可泛化人员重识别 [[CVPR 2021](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FDai_Generalizable_Person_Re-Identification_With_Relevance-Aware_Mixture_of_Experts_CVPR_2021_paper.pdf)] (**RaMoE**) [187]\n- TransMatcher：通过Transformer进行深度图像匹配以实现可泛化人员重识别 [[NeurIPS 2021](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F0f49c89d1e7298bb9930789c8ed59d48-Paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FShengcaiLiao\u002FQAConv)] (**TransMatcher**) [208]\n\n### 人脸识别与防欺骗\n- 用于人脸呈现攻击检测的多对抗判别式深度领域泛化 [[CVPR 2019](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShao_Multi-Adversarial_Discriminative_Deep_Domain_Generalization_for_Face_Presentation_Attack_Detection_CVPR_2019_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Frshaojimmy\u002FCVPR2019-MADDoG)] (**MADDG**) [78]\n- 向深度人脸识别的通用表征学习迈进 [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FShi_Towards_Universal_Representation_Learning_for_Deep_Face_Recognition_CVPR_2020_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FMatyushinMA\u002Funi_rep_deep_faces)] [22]\n- 基于多领域解耦表征学习的跨领域人脸呈现攻击检测 [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_Cross-Domain_Face_Presentation_Attack_Detection_via_Multi-Domain_Disentangled_Representation_Learning_CVPR_2020_paper.pdf)] [106]\n- 用于人脸防欺骗的单侧领域泛化 [[CVPR 2020](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FJia_Single-Side_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2020_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Ftaylover-pei\u002FSSDG-CVPR2020)] (**SSDG**) [79]\n\n## 相关主题\n### 终身学习\n- 用于领域泛化的顺序学习 [[ECCV研讨会2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.01377)] (**S-MLDG**) [14]\n- 基于领域随机化和元学习的视觉表征持续适应 [[CVPR 
2021](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FVolpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] (**Meta-DR**) [153]\n\n# 出版物\n\n| 顶级会议  |  论文编号  |\n|  ----  | ----  |\n|  2014年之前  |  **CVPR:** [8], [11]; **ICCV:** [16], [41]; **NeurIPS:** [31], [113]; **ECCV:** [32], [87], [103]; **ICML:** [65]  |\n|  2015年  |  **CVPR:** [89]; **ICML:** [30]; **ICCV:** [6], [46], [88]  |\n|  2016年  |  **CVPR:** [42], [44], [120]; **IJCAI:** [66]; **ECCV:** [43], [47], [229]  |\n|  2017年  |  **CVPR:** [20]; **ICCV:** [2], [71]; **NeurIPS:** [38]  |\n|  2018年  |  **ICLR:** [1], [53], [68]; **CVPR:** [76]; **ECCV:** [45], [77]; **NeurIPS:** [4], [25]  |\n|  2019年  |  **ICLR:** [35], [37]; **CVPR:** [78], [98]; **ICML:** [5], [107], [110]; **ICCV:** [7], [21], [33], [62], [63]; **NeurIPS:** [18]  |\n|  2020年  |  **ICLR:** [55], [83], [126], [218]; **CVPR:** [22], [27], [79], [106]; **ICML:** [105]; **ECCV:** [14], [15], [28], [57], [64], [94], [99], [104]; **NeurIPS:** [75], [86], [112], [181]  |\n|  2021年  |  **ICLR:** [19], [56], [59], [134], [139], [171], [175], [196], [221]; **CVPR:** [12], [13], [115], [116], [117], [118], [119], [132], [141], [147], [153], [160], [168], [187], [193]; **IJCAI:** [155], [195], [230]; **ICML:** [73], [190], [217]; **ICCV:** [129], [130], [133], [135], [138], [142], [143], [148], [149], [150], [158], [159], [194]; **MM:** [131], [137], [146], [157]; **NeurIPS:** [136], [145], [152], [154], [198], [199], [200], [201], [202], [203], [204], [205], [206], [207], [208], [225], [228]  |\n|  2022年  |  **AAAI:** [140]; **ICLR:** [213], [224]; **CVPR:** [69], [182], [214]; **ICML:** [173]; **MM:** [211]  |\n|  2023年  |  **WACV:** [215]; **ICLR:** [223]; **ICCV:** [231]; **KDD:** [233] |\n|  2024年  |  **WACV:** [234] |\n|  2025年  |  **WACV:** [232] |\n\n| 顶级期刊  |  论文编号  |\n|  ----  | ----  |\n|  2017年之前  |  **IJCV:** [9], [10]; 
**JMLR:** [226]  |\n|  2017年  |  **TPAMI:** [67]; **TIP:** [91]  |\n|  2021年  |  **TIP:** [34], [144]; **TPAMI:** [101], [114], [191], [197]; **JMLR:** [188]  |\n|  2022年  |  **TMLR:** [209], [210]; **IJCV:** [162]  |\n\n| arXiv  |  论文编号  |\n|  ----  | ----  |\n|  2014年之前  |  [40]  |\n|  2017年  |  [36], [52]  |\n|  2018年  |  [166]  |\n|  2019年  |  [81], [123], [165], [169]  |\n|  2020年  |  [60], [82], [96], [102], [127], [189], [227]  |\n|  2021年  |  [3], [54], [58], [151], [156], [161], [163], [170], [174], [176], [178], [179], [184], [192], [219], [222]  |\n|  2022年  |  [183], [212], [216], [220]  |\n\n| 其他  |  论文编号  |\n|  ----  |  ----  |\n|  2018年之前  |  [29], [39], [48], [49], [50], [51], [90], [92], [97], [109], [111], [121], [122]  |\n|  2019年  |  [26], [72], [84], [167]  |\n|  2020年  |  [17], [23], [24], [61], [70], [74], [80], [85], [93], [95], [100], [124], [125], [128], [164]  |\n|  2021年  |  [108], [172], [177], [180], [185]  |\n|  2022年  |  [186]  |\n\n# 数据集\n> 以下数据集的评估通常采用留一域的协议：随机选择一个域作为目标域进行留出，其余域则用作源域。\n\n| 数据集（下载链接） | 描述 | 相关论文 |\n| :---- | :----: | :----: |\n| **Colored MNIST** [[165]](https:\u002F\u002Farxiv.53yu.com\u002Fpdf\u002F1907.02893.pdf) | 手写数字识别；3个域：{0.1, 0.3, 0.9}；7万张尺寸为(2, 28, 28)的样本；2个类别 | [82], [138], [140], [149], [152], [154], [165], [171], [173], [190], [200], [202], [214], [216], [217], [219], [220], [222], [224], [234] |\n| **Rotated MNIST** [[6]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2015\u002Fpapers\u002FGhifary_Domain_Generalization_for_ICCV_2015_paper.pdf) ([原版](https:\u002F\u002Fgithub.com\u002FEmma0118\u002Fmate)) | 手写数字识别；6个旋转角度的域：{0, 15, 30, 45, 60, 75}；7千张尺寸为(1, 28, 28)的样本；10个类别 | [5], [6], [15], [35], [53], [55], [63], [71], [73], [74], [76], [77], [86], [90], [105], [107], [138], [140], [170], [173], [202], [204], [206], [216], [222], [224] |\n| **Digits-DG** [[28]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.03304) | 手写数字识别；4个域：{MNIST 
[[29]](http:\u002F\u002Flushuangning.oss-cn-beijing.aliyuncs.com\u002FCNN%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97\u002FGradient-Based_Learning_Applied_to_Document_Recognition.pdf), MNIST-M [[30](http:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fganin15.pdf)], SVHN [[31](https:\u002F\u002Fresearch.google\u002Fpubs\u002Fpub37648.pdf)], SYN [[30](http:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fganin15.pdf)]}；2.4万张样本；10个类别 | [21], [25], [27], [28], [34], [35], [55], [59], [63], [94], [98], [116], [118], [130], [141], [142], [146], [151], [153], [157], [158], [159], [160], [166], [168], [179], [189], [203], [209], [210], [232] ,[233] |\n| **VLCS** [[16]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_iccv_2013\u002Fpapers\u002FFang_Unbiased_Metric_Learning_2013_ICCV_paper.pdf) ([1](https:\u002F\u002Fdrive.google.com\u002Fuc?id=1skwblH1_okBwxWxmRsp9_qi15hyPpxg8); 或 [原版](https:\u002F\u002Fwww.mediafire.com\u002Ffile\u002F7yv132lgn1v267r\u002Fvlcs.tar.gz\u002Ffile)) | 物体识别；4个域：{Caltech [[8]](http:\u002F\u002Fwww.vision.caltech.edu\u002Fpublications\u002FFei-FeiCompVIsImageU2007.pdf), LabelMe [[9]](https:\u002F\u002Fidp.springer.com\u002Fauthorize\u002Fcasa?redirect_uri=https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs11263-007-0090-8.pdf&casa_token=n3w4Sen-huAAAAAA:sJY2dHreDGe2V4KE9jDehftM1W-Sn1z8bqeF_WK8Q9t4B0dFk5OXEAlIP7VYnr8UfiWLAOPG7dK0ZveYWs8), PASCAL [[10]](https:\u002F\u002Fidp.springer.com\u002Fauthorize\u002Fcasa?redirect_uri=https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs11263-009-0275-4.pdf&casa_token=Zb6LfMuhy_sAAAAA:Sqk_aoTWdXx37FQjUFaZN9ZMQxrUhqO2S_HbOO2a9BKtejW7CMekg-3PDVw6Yjw7BZqihyjP0D_Y6H2msBo), SUN [[11]](https:\u002F\u002Fdspace.mit.edu\u002Fbitstream\u002Fhandle\u002F1721.1\u002F60690\u002FOliva_SUN%20database.pdf?sequence=1&isAllowed=y)}；10,729张尺寸为(3, 224, 224)的样本；5个类别；约3.6 GB | [2], [6], [7], [14], [15], [18], [60], [61], [64], [67], [68], [70], [71], [74], [76], [77], [81], [83], 
[86], [91], [98], [99], [101], [102], [103], [117], [118], [126], [127], [131], [132], [136], [138], [140], [142], [145], [146], [148], [149], [161], [170], [173], [174], [184], [190], [195], [199], [201], [202], [203], [209], [216], [217], [222], [223], [224], [231], [233], [234] |\n| **Office31+Caltech** [[32]](https:\u002F\u002Flinkspringer.53yu.com\u002Fcontent\u002Fpdf\u002F10.1007\u002F978-3-642-15561-1_16.pdf) ([1](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F14OIlzWFmi5455AjeBZLak2Ku-cFUrfEo\u002Fview)) | 物体识别；4个域：{Amazon、Webcam、DSLR、Caltech}；4,652张样本，分为31个类别（office31）或10个类别（office31+caltech）；51 MB | [6], [35], [67], [68], [70], [71], [80], [91], [96], [119], [131], [167] |\n| **OfficeHome** [[20]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FVenkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf) ([1](https:\u002F\u002Fdrive.google.com\u002Fuc?id=1uY0pj7oFsjMxRwaD3Sxy0jgel0fsYXLC); 或 [原版](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F0B81rNlvomiwed0V1YUxQdC1uOTg\u002Fview?resourcekey=0-2SNWq0CDAuWOBRRBL7ZZsw)) | 物体识别；4个域：{Art、Clipart、Product、Real World}；15,588张尺寸为(3, 224, 224)的样本；65个类别；1.1 GB | [19], [54], [28], [34], [55], [58], [60], [61], [64], [80], [92], [94], [98], [101], [118], [126], [130], [131], [132], [133], [137], [138], [140], [146], [148], [156], [159], [160], [162], [163], [167], [173], [174], [178], [179], [182], [184], [189], [190], [199], [201], [202], [203], [206], [211], [212], [214], [216], [217], [220], [222], [223], [224], [230], [231], [233], [234] |\n| **PACS** [[2]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLi_Deeper_Broader_and_ICCV_2017_paper.pdf) ([1](https:\u002F\u002Fdrive.google.com\u002Fuc?id=1JFr8f805nMUelQWWmfnJR3y4_SYoN5Pd); 或 [原版](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F0B6x7gtvErXgfUU1WcGY5SzdwZVk?resourcekey=0-2fvpQY_QSyJf2uIECzqPuQ)) | 物体识别；4个域：{photo、art_painting、cartoon、sketch}；9,991张尺寸为(3, 224, 
224)的样本；7个类别；174 MB | [1], [2], [4], [5], [14], [15], [18], [19], [34], [54], [28], [35], [55], [56], [57], [58], [59], [60], [61], [64], [69], [73], [77], [80], [81], [82], [83], [84], [86], [90], [92], [94], [96], [98], [99], [101], [102], [104], [105], [116], [117], [118], [127], [129], [130], [131], [132], [136], [137], [138], [139], [140], [142], [14…\n\n# 代码库\n> 我们列出了领域泛化相关的 GitHub 仓库（按星数排序）。\n\n- [DeepDG (jindongwang)](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)：深度领域泛化工具包。\n- [迁移学习库 (thuml)](https:\u002F\u002Fgithub.com\u002Fthuml\u002FTransfer-Learning-Library)：用于领域适应、任务适应和领域泛化。\n- [DomainBed (facebookresearch)](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDomainBed) [134]：一个用于测试领域泛化算法的工具集。\n- [Dassl (KaiyangZhou)](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)：一个用于领域适应、半监督学习和领域泛化的 PyTorch 工具箱。\n\n# 讲座、教程与报告\n- **（2021 年报告）** 泛化到未见领域：领域泛化综述 [155]。[[视频](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1ro4y1S7dd\u002F)] [[幻灯片](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002FDGSurvey-ppt.pdf)] *(王晋东（MSRA），中文演讲)*\n\n# 其他资源\n- 由 [amber0309](https:\u002F\u002Fgithub.com\u002Famber0309\u002FDomain-generalization) 整理的领域泛化论文合集。\n- 由 [jindongwang](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#domain-generalization) 整理的领域泛化论文合集。\n- 由 [yfzhang114](https:\u002F\u002Fgithub.com\u002Fyfzhang114\u002FGeneralization-Causality) 整理的关于领域泛化、领域适应、因果关系、鲁棒性、提示学习、优化、生成模型等方面的论文合集。\n- 基于深度神经网络的视觉识别中跨领域的适应与泛化 [[2020 年博士论文，周凯阳（萨里大学）]](https:\u002F\u002Fopenresearch.surrey.ac.uk\u002Fesploro\u002Foutputs\u002Fdoctoral\u002FAdaptation-and-Generalization-Across-Domains-in\u002F99513024202346) [164]\n\n# 贡献与联系\n欢迎为我们的仓库贡献力量。\n\n- 如果您想 *纠正错误*，请直接操作；\n- 如果您想 *添加或更新论文*，请完成以下步骤（如有必要）：\n    1. 找到最大索引（当前最大：**[234]**，暂无空余），并创建一个新的。\n    2. 更新 [出版物](#publications)。\n    3. 更新 [论文](#papers)。\n    4. 
更新 [数据集](#datasets)。\n- 如有任何 *问题或建议*，请通过电子邮件（yuanjk@zju.edu.cn）或提交 GitHub Issue 与我们联系。\n\n感谢您的合作与贡献！\n\n# 致谢\n- [目录](#contents) 的设计层次主要参考了 [awesome-domain-adaptation](https:\u002F\u002Fgithub.com\u002Fzhaoxin94\u002Fawesome-domain-adaptation#unsupervised-da)。\n- 我们参考了 [3] 来设计 [目录](#contents) 和 [数据集](#datasets) 表格。","# Awesome-Domain-Generalization 快速上手指南\n\n`Awesome-Domain-Generalization` 并非一个单一的代码库或工具包，而是一个**领域泛化（Domain Generalization, DG）**方向的精选资源列表，汇集了相关的论文、数据集、开源代码实现及教程。本指南将帮助开发者快速利用该列表找到所需的算法代码和数据集，并搭建基础开发环境。\n\n## 环境准备\n\n在开始复现列表中的算法或研究相关论文前，请确保满足以下系统要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python 版本**: 建议 Python 3.8 或更高版本（具体取决于所选子项目的依赖）。\n*   **深度学习框架**: 大多数项目基于 **PyTorch**，部分可能支持 TensorFlow。\n*   **硬件**: 推荐使用 NVIDIA GPU (CUDA 11.0+) 以加速训练和评估。\n*   **前置依赖**:\n    *   Git\n    *   pip 或 conda (推荐用于管理虚拟环境)\n\n## 安装步骤\n\n由于该仓库是资源索引，你需要根据需求克隆具体的算法仓库或安装通用的 DG 工具库（如列表中推荐的 `Dassl.pytorch`）。\n\n### 1. 克隆资源列表仓库\n首先获取最新的论文和代码链接列表：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjunkunyuan\u002FAwesome-Domain-Generalization.git\ncd Awesome-Domain-Generalization\n```\n\n### 2. 安装通用开发库 (推荐)\n列表中多次提及 **[Dassl.pytorch](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch)**，这是一个集成了多种域适应和域泛化算法的强力工具箱，适合快速上手。注意：其官方文档采用从源码安装的方式，而非 `pip install dassl`。\n\n**建议先使用 conda 创建独立环境：**\n```bash\nconda create -n dg_env python=3.8\nconda activate dg_env\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n**再从源码安装 Dassl.pytorch：**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FDassl.pytorch.git\ncd Dassl.pytorch\npip install -r requirements.txt\npython setup.py develop\n```\n\n> **国内加速提示**：如果遇到下载速度慢的问题，建议使用清华或阿里镜像源，例如：\n> ```bash\n> pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -r requirements.txt\n> ```\n\n### 3. 
获取特定算法代码\n若需运行列表中特定的 SOTA 算法（如 `IRM`, `DANN`, `L2A-OT` 等），请直接访问 README 中对应论文下方的 `[Code]` 链接进行克隆。例如：\n```bash\n# 示例：克隆 IRM 官方实现\ngit clone https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FInvariantRiskMinimization.git\ncd InvariantRiskMinimization\npip install -r requirements.txt  # 依赖文件名以目标仓库实际提供的为准\n```\n\n## 基本使用\n\n以下以使用 `Dassl.pytorch` 运行一个基础的域泛化实验为例（基于 PACS 数据集，使用 ERM 基线方法，Dassl 中对应的 trainer 名为 `Vanilla`）：\n\n### 1. 准备数据集\n下载列表中提到的 **PACS dataset**，并按框架要求的目录结构存放（通常为 `data\u002Fpacs`）。\n\n### 2. 运行训练脚本\n使用 Dassl.pytorch 仓库中的训练脚本启动训练。以下命令表示使用 3 个源域训练，并在剩下的 1 个目标域上测试（脚本与配置文件的具体路径请以 Dassl.pytorch 仓库当前结构为准）：\n\n```bash\npython tools\u002Ftrain.py \\\n    --root .\u002Fdata \\\n    --trainer Vanilla \\\n    --source-domains art_painting cartoon sketch \\\n    --target-domains photo \\\n    --dataset-config-file configs\u002Fdatasets\u002Fdg\u002Fpacs.yaml \\\n    --output-dir output\u002Fpacs_erm\n```\n\n### 3. 查看结果\n训练完成后，模型权重和日志将保存在 `output\u002Fpacs_erm` 目录下。你可以查看 `log.txt` 获取准确率指标，或使用提供的评估脚本对模型进行测试。\n\n---\n**提示**：对于列表中其他特定算法（如基于元学习或因果推断的方法），请参考其各自仓库的 `README.md` 获取专属的运行指令，因为不同项目的参数配置可能存在差异。","某医疗 AI 团队正致力于开发一款能适配不同医院设备的肺炎 X 光诊断模型，却苦于训练数据仅来自少数几家医院，导致模型在新医院表现大幅下滑。\n\n### 没有 Awesome-Domain-Generalization 时\n- **文献检索如大海捞针**：研究人员需手动在 arXiv、Google Scholar 等平台筛选“域泛化”相关论文，耗时数周仍难以覆盖最新进展，极易遗漏关键算法。\n- **技术路线选择盲目**：面对域对齐、元学习、数据增强等十几种技术流派，团队缺乏系统分类指引，只能凭经验盲目尝试，导致大量算力浪费在无效方案上。\n- **复现代码成本高昂**：找到论文后，往往找不到官方开源代码，或需在不同仓库中拼凑依赖，环境配置与调试占据了 80% 的开发时间。\n- **应用场景匹配困难**：难以快速确认哪些方法已在“医学影像”或“人脸防伪”等具体场景验证过，增加了项目落地的不确定性。\n\n### 使用 Awesome-Domain-Generalization 后\n- **一站式获取前沿资源**：团队直接利用其分类清晰的论文列表（如按“基于因果的方法”或“推理时方法”筛选），半天内即可锁定 ICLR、CVPR 等顶会的核心成果。\n- **精准定位技术方案**：借助详细的目录结构，迅速对比不同方法的优劣，直接选定适合小样本医疗数据的“基于正则化”或“解耦表示学习”路线。\n- **代码复用效率倍增**：通过列表中附带的官方及高质量非官方代码链接，快速复现基准模型，将原本数周的搭建工作缩短至两天。\n- **垂直领域案例参考**：直接查阅\"Applications\"章节下的人员重识别与人脸防伪案例，为医疗场景的迁移提供可借鉴的调参策略与评估指标。\n\nAwesome-Domain-Generalization 将原本分散杂乱的域泛化研究资源整合为结构化知识库，让研发团队从繁琐的调研中解放出来，专注于核心算法的创新与落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjunkunyuan_Awesome-Domain-Generalization_f86f3642.png","junkunyuan","Junkun 
Yuan","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjunkunyuan_aaea8331.jpg","Think big, start small, iterate often.","@Tencent","Shenzhen, China",null,"https:\u002F\u002Fgithub.com\u002Fjunkunyuan",530,53,"2026-04-03T12:39:55",5,"","未说明",{"notes":90,"python":88,"dependencies":91},"该仓库是一个关于“域泛化”（Domain Generalization）的论文、代码和数据集的资源列表（Awesome List），本身不是一个可直接运行的单一软件工具。因此 README 中未包含具体的操作系统、硬件配置或依赖库版本要求。用户需根据列表中引用的具体论文项目（如 DANN, IRM, DomainBed 等）的代码仓库分别查看其独立的运行环境需求。",[],[51,13],[94,95,96,97,98,99,100],"domain-generalization","awesome-list","deep-learning","datasets","libraries","papers","awesome","2026-03-27T02:49:30.150509","2026-04-06T08:45:32.529082",[],[]]