[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jindongwang--transferlearning":3,"tool-jindongwang--transferlearning":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":118,"forks":119,"last_commit_at":120,"license":121,"difficulty_score":109,"env_os":122,"env_gpu":122,"env_ram":122,"env_deps":123,"category_tags":128,"github_topics":129,"view_count":23,"oss_zip_url":149,"oss_zip_packed_at":149,"status":16,"created_at":150,"updated_at":151,"faqs":152,"releases":187},2608,"jindongwang\u002Ftransferlearning","transferlearning","Transfer learning \u002F domain adaptation \u002F domain generalization \u002F multi-task learning etc. 
Papers, codes, datasets, applications, tutorials.-迁移学习","transferlearning 是一个专注于迁移学习领域的综合性开源资源库，旨在为研究者和开发者提供“一站式”的知识与代码支持。它系统性地整理了迁移学习、领域自适应、领域泛化及多任务学习等前沿方向的核心内容，有效解决了该领域知识分散、复现困难以及缺乏统一基准的痛点。\n\n无论是刚入门的学生还是资深科研人员，都能在这里找到所需资源：从经典的学术论文、详细的理论综述，到可直接运行的代码实现和权威数据集评测基准。此外，项目还收录了相关硕博论文、知名学者列表、顶级会议期刊索引以及实际应用案例，甚至涵盖了大模型评估与联邦学习等关联领域的最新进展。\n\n作为被 CVPR、NeurIPS 等顶会广泛引用的资源，transferlearning 的最大亮点在于其极高的全面性与社区活跃度。它不仅帮助开发者快速上手算法，更为研究人员提供了验证新想法的坚实基线。如果你希望深入探索如何让 AI 模型在不同场景间灵活复用知识，这个宝库将是你不可或缺的得力助手。","[![Contributors][contributors-shield]][contributors-url]\n[![Forks][forks-shield]][forks-url]\n[![Stargazers][stars-shield]][stars-url]\n[![Issues][issues-shield]][issues-url]\n\n\u003Ch1 align=\"center\">\n  \u003Cbr>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjindongwang_transferlearning_readme_aaf044164570.jpg\" alt=\"Transfer Leanring\" width=\"500\">\n\u003C\u002Fh1>\n\n\u003Ch4 align=\"center\">Everything about Transfer Learning. 迁移学习.\u003C\u002Fh4>\n\n\u003Cp align=\"center\">\n  \u003Cstrong>\u003Ca href=\"#0papers-论文\">Papers\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Cstrong>\u003Ca href=\"#1introduction-and-tutorials-简介与教程\">Tutorials\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Ca href=\"#2transfer-learning-areas-and-papers-研究领域与相关论文\">Research areas\u003C\u002Fa> •\n  \u003Ca href=\"#3theory-and-survey-理论与综述\">Theory\u003C\u002Fa> •\n  \u003Ca href=\"#3theory-and-survey-理论与综述\">Survey\u003C\u002Fa> •\n  \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\">Code\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Cstrong>\u003Ca href=\"#7datasets-and-benchmarks-数据集与评测结果\">Dataset & benchmark\u003C\u002Fa>\u003C\u002Fstrong>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"#6transfer-learning-thesis-硕博士论文\">Thesis\u003C\u002Fa> •\n  \u003Ca href=\"#5transfer-learning-scholars-著名学者\">Scholars\u003C\u002Fa> •\n  \u003Ca 
href=\"#8transfer-learning-challenges-迁移学习比赛\">Contests\u003C\u002Fa> •\n  \u003Ca href=\"#journals-and-conferences\">Journal\u002Fconference\u003C\u002Fa> •\n  \u003Ca href=\"#applications-迁移学习应用\">Applications\u003C\u002Fa> •\n  \u003Ca href=\"#other-resources-其他资源\">Others\u003C\u002Fa> •\n  \u003Ca href=\"#contributing-欢迎参与贡献\">Contributing\u003C\u002Fa>\n\u003C\u002Fp>\n\n**Widely used by top conferences and journals:** \n- Conferences: [[CVPR'22](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FFaDE-TCV\u002Fhtml\u002FZhang_Segmenting_Across_Places_The_Need_for_Fair_Transfer_Learning_With_CVPRW_2022_paper.html)] [[NeurIPS'21](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F731b03008e834f92a03085ef47061c4a-Paper.pdf)] [[IJCAI'21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03097)] [[ESEC\u002FFSE'20](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3368089.3409696)] [[IJCNN'20](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9207556)] [[ACMMM'18](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3240508.3240512)] [[ICME'19](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8784776\u002F)]\n- Journals: [[IEEE TKDE](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9782500\u002F)] [[ACM TIST](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3360309)] [[Information sciences](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0020025520308458)] [[Neurocomputing](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231221007025)] [[IEEE Transactions on Cognitive and Developmental Systems](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9659817)]\n\n```\n@Misc{transferlearning.xyz,\nhowpublished = {\\url{http:\u002F\u002Ftransferlearning.xyz}},   \ntitle = {Everything about Transfer Learning and Domain Adaptation},  
\nauthor = {Wang, Jindong and others}  \n}  \n```\n\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re) [![MIT License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) [![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Anti%20996-blue.svg)](https:\u002F\u002Fgithub.com\u002F996icu\u002F996.ICU\u002Fblob\u002Fmaster\u002FLICENSE) [![996.icu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flink-996.icu-red.svg)](https:\u002F\u002F996.icu) \n\nRelated Codes:\n  - Large language model evaluation: [[llm-eval](https:\u002F\u002Fllm-eval.github.io\u002F)]\n  - Large language model enhancement: [[llm-enhance](https:\u002F\u002Fllm-enhance.github.io\u002F)]\n  - Robust machine learning: [[robustlearn: robust machine learning](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Frobustlearn)]\n  - Semi-supervised learning: [[USB: unified semi-supervised learning benchmark](https:\u002F\u002Fgithub.com\u002FSemi-supervised-learning)] | [[TorchSSL: a unified SSL library](https:\u002F\u002Fgithub.com\u002FTorchSSL\u002FTorchSSL)] \n  - LLM benchmark: [[PromptBench: adversarial robustness of prompts of LLMs](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbench)]\n  - Federated learning: [[PersonalizedFL: library for personalized federated learning](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FPersonalizedFL)]\n  - Activity recognition and machine learning: [[Activity recognition](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Factivityrecognition)]｜[[Machine learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002FMachineLearning)]\n\n- - -\n\n**NOTE:** You can directly open the code in [GitHub Codespaces](https:\u002F\u002Fdocs.github.com\u002Fen\u002Fcodespaces\u002Fgetting-started\u002Fquickstart#introduction) on the web to run them without downloading! 
Also, try [github.dev](https:\u002F\u002Fgithub.dev\u002Fjindongwang\u002Ftransferlearning).\n\n## 0.Papers (论文)\n\n[Awesome transfer learning papers (迁移学习文章汇总)](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fdoc\u002Fawesome_paper.md)\n\n- [Paperweekly](http:\u002F\u002Fwww.paperweekly.site\u002Fcollections\u002F231\u002Fpapers): A website to recommend and read paper notes\n\n**Latest papers**: \n\n- By topic: [doc\u002Fawesome_paper.md](\u002Fdoc\u002Fawesome_paper.md)\n- By date: [doc\u002Fawesome_paper_date.md](\u002Fdoc\u002Fawesome_paper_date.md)\n\n*Updated at 2025-02-18:*\n\n- Simulations of Common Unsupervised Domain Adaptation Algorithms for Image Classification [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10694)]\n  - Unsupervised domain adaptation for image classification\n\n- Semantics-aware Test-time Adaptation for 3D Human Pose Estimation [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10724)]\n  - Test-time adaptation for 3D human pose estimation\n\n- Transfer Learning of CATE with Kernel Ridge Regression [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11331)]\n  - Transfer learning with kernel ridge regression\n\n- Why Domain Generalization Fail? 
A View of Necessity and Sufficiency [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10716)] \n  - Analyzes why domain generalization fails from the view of necessity and sufficiency\n\n\n*Updated at 2025-02-11:*\n\n- Beyond Batch Learning: Global Awareness Enhanced Domain Adaptation [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06272)]\n  - Global awareness for enhanced domain adaptation\n\n- - -\n\n## 1.Introduction and Tutorials (简介与教程)\n\nWant to quickly learn transfer learning? 想尽快入门迁移学习？看下面的教程。\n\n- Books 书籍\n  - **Introduction to Transfer Learning: Algorithms and Practice** [[Buy or read](https:\u002F\u002Flink.springer.com\u002Fbook\u002F9789811975837)]\n  - **《迁移学习》（杨强）** [[Buy](https:\u002F\u002Fitem.jd.com\u002F12930984.html)] [[English version](https:\u002F\u002Fwww.cambridge.org\u002Fcore\u002Fbooks\u002Ftransfer-learning\u002FCCFFAFE3CDBC245047F1DEC71D9EF3C7)]\n  - **《迁移学习导论》(王晋东、陈益强著)** [[Homepage](http:\u002F\u002Fjd92.wang\u002Ftlbook)] [[Buy](https:\u002F\u002Fitem.jd.com\u002F13272157.html)]\n\n- Blogs 博客\n  - [Zhihu blogs - 知乎专栏《小王爱迁移》系列文章](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F130244395)\n\t\n- Video tutorials 视频教程\n  - Transfer learning 迁移学习:\n    - [Recent advance of transfer learning - 2022年最新迁移学习发展现状探讨](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1nY411E7Uc\u002F)\n    - [Definitions of transfer learning area - 迁移学习领域名词解释](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1fu411o7BW) [[Article](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F428097044)]\n    - [Transfer learning by Hung-yi Lee @ NTU - 台湾大学李宏毅的视频讲解(中文视频)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qD6iD4TFsdQ)\n  - Domain generalization 领域泛化：\n    - [IJCAI-ECAI'22 tutorial on domain generalization - 领域泛化tutorial](https:\u002F\u002Fdgresearch.github.io\u002F)\n    - [Domain generalization - 迁移学习新兴研究方向领域泛化](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1ro4y1S7dd\u002F)\n  - Domain adaptation 领域自适应：\n    - [Domain adaptation - 
迁移学习中的领域自适应方法(中文)](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1T7411R75a\u002F) \n  \n\n- Brief introduction and slides 简介与ppt资料\n  - [Recent advance of transfer learning](https:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl16_aitime.pdf)\n  - [Domain generalization survey](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002FDGSurvey-ppt.pdf)\n  - [Brief introduction in Chinese](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002F%E8%BF%81%E7%A7%BB%E5%AD%A6%E4%B9%A0%E7%AE%80%E4%BB%8B.md)\n\t- [PPT (English)](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl03_transferlearning.pdf) | [PPT (中文)](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl08_tl_zh.pdf)\n  - 迁移学习中的领域自适应方法 Domain adaptation: [PDF](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl12_da.pdf) ｜ [Video on Bilibili](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1T7411R75a\u002F) | [Video on Youtube](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RbIsHNtluwQ&t=22s)\n  - Tutorial on transfer learning by Qiang Yang: [IJCAI'13](http:\u002F\u002Fijcai13.org\u002Ffiles\u002Ftutorial_slides\u002Ftd2.pdf) | [2016 version](http:\u002F\u002Fkddchina.org\u002Ffile\u002FIntroTL2016.pdf)\n\n- Talk is cheap, show me the code 动手教程、代码、数据 \n  - [Pytorch tutorial on transfer learning](https:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Ftransfer_learning_tutorial.html)\n\t- [Pytorch finetune](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FAlexNet_ResNet)\n\t- [DeepDA: a unified deep domain adaptation toolbox](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDA)\n\t- [DeepDG: a unified deep domain generalization toolbox](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)\n\t- [更多 
More...](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode)\n\n- [Transfer Learning Scholars and Labs - 迁移学习领域的著名学者、代表工作及实验室介绍](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md)\n- [Negative transfer - 负迁移](https:\u002F\u002Fwww.zhihu.com\u002Fquestion\u002F66492194\u002Fanswer\u002F242870418)\n\n- - -\n\n## 2.Transfer Learning Areas and Papers (研究领域与相关论文)\n\n- [Survey](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#survey)\n- [Theory](#theory)\n- [Pre-training\u002FFinetuning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#per-trainingfinetuning)\n- [Knowledge distillation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#knowledge-distillation)\n- [Traditional domain adaptation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#traditional-domain-adaptation)\n- [Deep domain adaptation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#deep-domain-adaptation)\n- [Domain generalization](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#domain-generalization)\n- [Source-free domain adaptation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#source-free-domain-adaptation)\n- [Multi-source domain adaptation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#multi-source-domain-adaptation)\n- [Heterogeneous transfer 
learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#heterogeneous-transfer-learning)\n- [Online transfer learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#online-transfer-learning)\n- [Zero-shot \u002F few-shot learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#zero-shot--few-shot-learning)\n- [Multi-task learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#multi-task-learning)\n- [Transfer reinforcement learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-reinforcement-learning)\n- [Transfer metric learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-metric-learning)\n- [Federated transfer learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#federated-transfer-learning)\n- [Lifelong transfer learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#lifelong-transfer-learning)\n- [Safe transfer learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#safe-transfer-learning)\n- [Transfer learning applications](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-learning-applications)\n\n- - -\n\n## 3.Theory and Survey (理论与综述)\n\nHere are some articles on transfer learning theory and survey.\n\n**Survey (综述文章)：**\n\n- 2023 Source-Free Unsupervised Domain Adaptation: A Survey 
[[arxiv](http:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00265)]\n- 2022 [Transfer Learning for Future Wireless Networks: A Comprehensive Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07572)\n- 2022 [A Review of Deep Transfer Learning and Recent Advancements](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.09679)\n- 2022 [Transferability in Deep Learning: A Survey](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Ftransferability-in-deep-learning-a-survey), from Mingsheng Long in THU.\n- 2021 Domain generalization: IJCAI-21 [Generalizing to Unseen Domains: A Survey on Domain Generalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03097) | [知乎文章](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F354740610) | [微信公众号](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FDsoVDYqLB1N7gj9X5UnYqw)\n  - First survey on domain generalization\n  - 第一篇对Domain generalization (领域泛化)的综述\n- 2021 Vision-based activity recognition: [A Survey of Vision-Based Transfer Learning in Human Activity Recognition](https:\u002F\u002Fwww.mdpi.com\u002F2079-9292\u002F10\u002F19\u002F2412)\n- 2021 ICSAI [A State-of-the-Art Survey of Transfer Learning in Structural Health Monitoring](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9664171)\n- 2020 [Transfer learning: survey and classification](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-981-15-5345-5_13), Advances in Intelligent Systems and Computing. 
\n- 2020 迁移学习最新survey，来自中科院计算所庄福振团队，发表在Proceedings of the IEEE: [A Comprehensive Survey on Transfer Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02685)\n- 2020 负迁移的综述：[Overcoming Negative Transfer: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00909)\n- 2020 知识蒸馏的综述: [Knowledge Distillation: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05525)\n- 用transfer learning进行sentiment classification的综述：[A Survey of Sentiment Analysis Based on Transfer Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8746210) \n- 2019 一篇新survey：[Transfer Adaptation Learning: A Decade Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.04687)\n- 2018 一篇迁移度量学习的综述: [Transfer Metric Learning: Algorithms, Applications and Outlooks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.03944)\n- 2018 一篇最近的非对称情况下的异构迁移学习综述：[Asymmetric Heterogeneous Transfer Learning: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10834)\n- 2018 Neural style transfer的一个survey：[Neural Style Transfer: A Review](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.04058)\n- 2018 深度domain adaptation的一个综述：[Deep Visual Domain Adaptation: A Survey](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231218306684)\n- 2017 多任务学习的综述，来自香港科技大学杨强团队：[A survey on multi-task learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08114)\n- 2017 异构迁移学习的综述：[A survey on heterogeneous transfer learning](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1186\u002Fs40537-017-0089-0)\n- 2017 跨领域数据识别的综述：[Cross-dataset recognition: a survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.04396)\n- 2016 [A survey of transfer learning](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1gfgXLXT)。其中交代了一些比较经典的如同构、异构等学习方法代表性文章。\n- 2015 中文综述：[迁移学习研究进展](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1bpautob)\n- 2010 [A survey on transfer learning](http:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5288526\u002F)\n- Survey on 
applications - 应用导向的综述：\n\t- 视觉domain adaptation综述：[Visual Domain Adaptation: A Survey of Recent Advances](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1o8BR7Vc)\n\t- 迁移学习应用于行为识别综述：[Transfer Learning for Activity Recognition: A Survey](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1kVABOYr)\n\t- 迁移学习与增强学习：[Transfer Learning for Reinforcement Learning Domains: A Survey](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1slfr0w1)\n\t- 多个源域进行迁移的综述：[A Survey of Multi-source Domain Adaptation](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1eSGREF4)。\n\n**Theory （理论文章）:**\n\n- ICML-20 [Few-shot domain adaptation by causal mechanism transfer](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.03497.pdf)\n\t- The first work on causal transfer learning\n\t- 日本理论组大佬Sugiyama的工作，causal transfer learning\n- CVPR-19 [Characterizing and Avoiding Negative Transfer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.09751)\n\t- Characterizing and avoid negative transfer\n\t- 形式化并提出如何避免负迁移\n- ICML-20 [On Learning Language-Invariant Representations for Universal Machine Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.04510)\n  - Theory for universal machine translation\n  - 对统一机器翻译模型进行了理论论证\n- NIPS-06 [Analysis of Representations for Domain Adaptation](https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2976474)\n- ML-10 [A Theory of Learning from Different Domains](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs10994-009-5152-4)\n- NIPS-08 [Learning Bounds for Domain Adaptation](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F3212-learning-bounds-for-domain-adaptation)\n- COLT-09 [Domain adaptation: Learning bounds and algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F0902.3430)\n- MMD paper：[A Hilbert Space Embedding for Distributions](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-540-75225-7_5) and [A Kernel Two-Sample Test](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv13\u002Fgretton12a.html)\n- Multi-kernel MMD paper: [Optimal kernel 
choice for large-scale two-sample tests](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4727-optimal-kernel-choice-for-large-scale-two-sample-tests)\n\n_ _ _\n\n## 4.Code (代码)\n\nUnified codebases for:\n- [Deep domain adaptation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDA)\n- [Deep domain generalization](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)\n- See all code here: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode.\n\nMore: see [HERE](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode) and [HERE](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1MVuk95mMg4ecGyUAIG94vedF81HtWQAr?usp=sharing) for an instant run using Google's Colab.\n\n_ _ _\n\n## 5.Transfer Learning Scholars (著名学者)\n\nHere are some transfer learning scholars and labs.\n\n**全部列表以及代表工作见[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md)** \n\nPlease note that this list is far from complete. A full list can be found [here](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md). Transfer learning is an active field. 
*If you are aware of some scholars, please add them here.*\n\n_ _ _\n\n## 6.Transfer Learning Thesis (硕博士论文)\n\nHere are some popular theses on transfer learning.\n\n[这里](https:\u002F\u002Fpan.baidu.com\u002Fshare\u002Finit?surl=iuzZhHdumrD64-yx_VAybA), 提取码：txyz。\n\n- - -\n\n## 7.Datasets and Benchmarks (数据集与评测结果)\n\nPlease see [HERE](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdata) for the popular transfer learning **datasets and benchmark** results.\n\n[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdata)整理了常用的公开数据集和一些已发表的文章在这些数据集上的实验结果。\n\n- - -\n\n## 8.Transfer Learning Challenges (迁移学习比赛)\n\n- [Visual Domain Adaptation Challenge (VisDA)](http:\u002F\u002Fai.bu.edu\u002Fvisda-2018\u002F)\n\n- - -\n\n## Journals and Conferences\n\nSee [here](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fvenues.md) for a full list of related journals and conferences.\n\n- - -\n\n## Applications (迁移学习应用)\n\n- [Computer vision](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#computer-vision)\n- [Medical and healthcare](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#medical-and-healthcare)\n- [Natural language processing](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#natural-language-processing)\n- [Time series](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#time-series)\n- [Speech](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#speech)\n- 
[Multimedia](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#multimedia)\n- [Recommendation](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#recommendation)\n- [Human activity recognition](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#human-activity-recognition)\n- [Autonomous driving](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#autonomous-driving)\n- [Others](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#others)\n\nSee [HERE](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md) for transfer learning applications.\n\n迁移学习应用请见[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md)。\n\n- - -\n\n## Other Resources (其他资源)\n\n- Call for papers:\n  - [Advances in Transfer Learning: Theory, Algorithms, and Applications](https:\u002F\u002Fwww.frontiersin.org\u002Fresearch-topics\u002F21133\u002Fadvances-in-transfer-learning-theory-algorithms-and-applications), DDL: October 2021\n\n- Related projects:\n  - Salad: [A semi-supervised domain adaptation library](https:\u002F\u002Fdomainadaptation.org)\n\n- - -\n\n## Contributing (欢迎参与贡献)\n\nIf you are interested in contributing, please refer to [HERE](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md) for contribution instructions.\n\n- - -\n\n### Copyright notice\n\n> ***[Notes] This GitHub repo can be used by following the corresponding licenses. 
I want to emphasize that it may contain some PDFs or theses, which were downloaded by me and can only be used for academic purposes. The copyrights of these materials are owned by corresponding publishers or organizations. All of this is for better academic research. If any of the authors or publishers have concerns, please contact me to delete or replace them.***\n\n[contributors-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[contributors-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fgraphs\u002Fcontributors\n[forks-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[forks-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fnetwork\u002Fmembers\n[stars-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[stars-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fstargazers\n[issues-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[issues-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\n[license-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[license-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmain\u002FLICENSE.txt\n","[![贡献者][contributors-shield]][contributors-url]\n[![复刻数][forks-shield]][forks-url]\n[![点赞数][stars-shield]][stars-url]\n[![问题数][issues-shield]][issues-url]\n\n\u003Ch1 align=\"center\">\n  \u003Cbr>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjindongwang_transferlearning_readme_aaf044164570.jpg\" alt=\"迁移学习\" width=\"500\">\n\u003C\u002Fh1>\n\n\u003Ch4 
align=\"center\">关于迁移学习的一切。迁移学习。\u003C\u002Fh4>\n\n\u003Cp align=\"center\">\n  \u003Cstrong>\u003Ca href=\"#0papers-论文\">论文\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Cstrong>\u003Ca href=\"#1introduction-and-tutorials-简介与教程\">教程\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Ca href=\"#2transfer-learning-areas-and-papers-研究领域与相关论文\">研究领域\u003C\u002Fa> •\n  \u003Ca href=\"#3theory-and-survey-理论与综述\">理论\u003C\u002Fa> •\n  \u003Ca href=\"#3theory-and-survey- 理论与综述\">综述\u003C\u002Fa> •\n  \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\">代码\u003C\u002Fa>\u003C\u002Fstrong> •\n  \u003Cstrong>\u003Ca href=\"#7datasets-and-benchmarks-数据集与评测结果\">数据集与基准测试\u003C\u002Fa>\u003C\u002Fstrong>\n\u003C\u002Fp>\n\u003Cp align=\"center\">\n  \u003Ca href=\"#6transfer-learning-thesis-硕博士论文\">论文\u003C\u002Fa> •\n  \u003Ca href=\"#5transfer-learning-scholars-著名学者\">学者\u003C\u002Fa> •\n  \u003Ca href=\"#8transfer-learning-challenges-迁移学习比赛\">比赛\u003C\u002Fa> •\n  \u003Ca href=\"#journals-and-conferences\">期刊\u002F会议\u003C\u002Fa> •\n  \u003Ca href=\"#applications- 迁移学习应用\">应用\u003C\u002Fa> •\n  \u003Ca href=\"#other-resources-其他资源\">其他\u003C\u002Fa> •\n  \u003Ca href=\"#contributing-欢迎参与贡献\">贡献\u003C\u002Fa>\n\u003C\u002Fp>\n\n**被顶级会议和期刊广泛使用：**\n- 会议：[[CVPR'22](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FFaDE-TCV\u002Fhtml\u002FZhang_Segmenting_Across_Places_The_Need_for_Fair_Transfer_Learning_With_CVPRW_2022_paper.html)] [[NeurIPS'21](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F731b03008e834f92a03085ef47061c4a-Paper.pdf)] [[IJCAI'21](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03097)] [[ESEC\u002FFSE'20](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3368089.3409696)] [[IJCNN'20](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9207556)] 
[[ACMMM'18](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3240508.3240512)] [[ICME'19](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8784776\u002F)]\n- 期刊：[[IEEE TKDE](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9782500\u002F)] [[ACM TIST](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3360309)] [[Information sciences](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0020025520308458)] [[Neurocomputing](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231221007025)] [[IEEE Transactions on Cognitive and Developmental Systems](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9659817)]\n\n```\n@Misc{transferlearning.xyz,\nhowpublished = {\\url{http:\u002F\u002Ftransferlearning.xyz}},   \ntitle = {关于迁移学习和领域适应的一切},  \nauthor = {王金东等}  \n}  \n```\n\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re) [![MIT 许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) [![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Anti%20996-blue.svg)](https:\u002F\u002Fgithub.com\u002F996icu\u002F996.ICU\u002Fblob\u002Fmaster\u002FLICENSE) [![996.icu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flink-996.icu-red.svg)](https:\u002F\u002F996.icu) \n\n相关代码：\n  - 大型语言模型评估：[[llm-eval](https:\u002F\u002Fllm-eval.github.io\u002F)]\n  - 大型语言模型增强：[[llm-enhance](https:\u002F\u002Fllm-enhance.github.io\u002F)]\n  - 鲁棒机器学习：[[robustlearn: 鲁棒机器学习](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Frobustlearn)]\n  - 半监督学习：[[USB: 统一的半监督学习基准](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSemi-supervised-learning)] | [[TorchSSL: 统一的 SSL 库](https:\u002F\u002Fgithub.com\u002FTorchSSL\u002FTorchSSL)] \n  - LLM 基准测试：[[PromptBench: LLM 
提示词的对抗鲁棒性](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fpromptbench)]\n  - 联邦学习：[[PersonalizedFL: 个性化联邦学习库](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FPersonalizedFL)]\n  - 活动识别与机器学习 [[Activity recognition](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Factivityrecognition)]｜[[Machine learning](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002FMachineLearning)]\n\n- - -\n\n**注意：** 您可以直接在网页上通过 [Gihub Codespaces](https:\u002F\u002Fdocs.github.com\u002Fen\u002Fcodespaces\u002Fgetting-started\u002Fquickstart#introduction) 打开代码并运行，无需下载！此外，也可以尝试 [github.dev](https:\u002F\u002Fgithub.dev\u002Fjindongwang\u002Ftransferlearning)。\n\n## 0.论文\n\n[精彩的迁移学习论文汇总](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fdoc\u002Fawesome_paper.md)\n\n- [Paperweekly](http:\u002F\u002Fwww.paperweekly.site\u002Fcollections\u002F231\u002Fpapers)：一个推荐和阅读论文笔记的网站\n\n**最新论文：**\n\n- 按主题：[doc\u002Fawesome_papers.md](\u002Fdoc\u002Fawesome_paper.md)\n- 按日期：[doc\u002Fawesome_paper_date.md](\u002Fdoc\u002Fawesome_paper_date.md)\n\n*更新于 2024 年 2 月 18 日：*\n\n- 图像分类中常见无监督领域适应算法的模拟 [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10694)]\n  - 用于图像分类的无监督领域适应\n\n- 语义感知的测试时适应用于 3D 人体姿态估计 [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10724)]\n  - 用于 3D 人体姿态估计的测试时适应\n\n- 使用核岭回归进行 CATE 的迁移学习 [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.11331)]\n  - 使用核岭回归进行迁移学习\n\n- 为什么领域泛化会失败？从必要性和充分性的角度分析 [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.10716)]\n  - 从必要性和充分性的角度分析领域泛化失败的原因\n\n\n*更新于 2024 年 2 月 11 日：*\n\n- 超越批处理学习：全局意识增强的领域适应 [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06272)]\n  - 全局意识用于增强领域适应\n\n- - -\n\n## 1.简介与教程\n\n想尽快入门迁移学习？看下面的教程。\n\n- 书籍\n  - **《迁移学习导论：算法与实践》** [[购买或阅读](https:\u002F\u002Flink.springer.com\u002Fbook\u002F9789811975837)]\n  - **《迁移学习》（杨强）** [[购买](https:\u002F\u002Fitem.jd.com\u002F12930984.html)] 
[[英文版](https:\u002F\u002Fwww.cambridge.org\u002Fcore\u002Fbooks\u002Ftransfer-learning\u002FCCFFAFE3CDBC245047F1DEC71D9EF3C7)]\n  - **《迁移学习导论》(王晋东、陈益强著)** [[主页](http:\u002F\u002Fjd92.wang\u002Ftlbook)] [[购买](https:\u002F\u002Fitem.jd.com\u002F13272157.html)]\n\n- 博客\n  - [知乎专栏《小王爱迁移》系列文章](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F130244395)\n\t\n- 视频教程\n  - 迁移学习:\n    - [2022年最新迁移学习发展现状探讨](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1nY411E7Uc\u002F)\n    - [迁移学习领域名词解释](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1fu411o7BW) [[文章](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F428097044)]\n    - [台湾大学李宏毅的视频讲解(中文视频)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=qD6iD4TFsdQ)\n  - 领域泛化：\n    - [IJCAI-ECAI'22关于领域泛化的tutorial](https:\u002F\u002Fdgresearch.github.io\u002F)\n    - [迁移学习新兴研究方向领域泛化](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1ro4y1S7dd\u002F)\n  - 领域自适应：\n    - [迁移学习中的领域自适应方法(中文)](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1T7411R75a\u002F) \n  \n\n- 简介与ppt资料\n  - [迁移学习最新进展](https:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl16_aitime.pdf)\n  - [领域泛化综述](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002FDGSurvey-ppt.pdf)\n  - [中文简介](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002F%E8%BF%81%E7%A7%BB%E5%AD%A6%E4%B9%A0%E7%AE%80%E4%BB%8B.md)\n\t- [PPT (英文)](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl03_transferlearning.pdf) | [PPT (中文)](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl08_tl_zh.pdf)\n  - 迁移学习中的领域自适应方法 Domain adaptation: [PDF](http:\u002F\u002Fjd92.wang\u002Fassets\u002Ffiles\u002Fl12_da.pdf) ｜ [Bilibili视频](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002FBV1T7411R75a\u002F) | [Youtube视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RbIsHNtluwQ&t=22s)\n  - 杨强的迁移学习教程: [IJCAI'13](http:\u002F\u002Fijcai13.org\u002Ffiles\u002Ftutorial_slides\u002Ftd2.pdf) | 
[2016年版本](http:\u002F\u002Fkddchina.org\u002Ffile\u002FIntroTL2016.pdf)\n\n- 动手教程、代码、数据 \n  - [Pytorch迁移学习教程](https:\u002F\u002Fpytorch.org\u002Ftutorials\u002Fbeginner\u002Ftransfer_learning_tutorial.html)\n\t- [Pytorch微调](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FAlexNet_ResNet)\n\t- [DeepDA: 一个统一的深度领域自适应工具箱](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDA)\n\t- [DeepDG: 一个统一的深度领域泛化工具箱](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)\n\t- [更多...](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode)\n\n- [迁移学习领域的著名学者、代表工作及实验室介绍](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md)\n- [负迁移](https:\u002F\u002Fwww.zhihu.com\u002Fquestion\u002F66492194\u002Fanswer\u002F242870418)\n\n- - -\n\n## 2.研究领域与相关论文\n\n- [综述](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#survey)\n- [理论](#theory)\n- [预训练\u002F微调](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#per-trainingfinetuning)\n- [知识蒸馏](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#knowledge-distillation)\n- [传统领域自适应](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#traditional-domain-adaptation)\n- [深度领域自适应](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#deep-domain-adaptation)\n- [领域泛化](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#domain-generalization)\n- 
[无源领域自适应](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#source-free-domain-adaptation)\n- [多源领域自适应](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#multi-source-domain-adaptation)\n- [异构迁移学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#heterogeneous-transfer-learning)\n- [在线迁移学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#online-transfer-learning)\n- [零样本\u002F少样本学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#zero-shot--few-shot-learning)\n- [多任务学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#multi-task-learning)\n- [迁移强化学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-reinforcement-learning)\n- [迁移度量学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-metric-learning)\n- [联邦迁移学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#federated-transfer-learning)\n- [终身迁移学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#lifelong-transfer-learning)\n- [安全迁移学习](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#safe-transfer-learning)\n- [迁移学习的应用](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fawesome_paper.md#transfer-learning-applications)\n\n- - -\n\n## 
3.理论与综述\n\n以下是一些关于迁移学习理论和综述的文章。\n\n**综述文章：**\n\n- 2023年 无源无监督域适应：综述 [[arxiv](http:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00265)]\n- 2022年 [面向未来无线网络的迁移学习：全面综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07572)\n- 2022年 [深度迁移学习及其最新进展综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.09679)\n- 2022年 [深度学习中的可迁移性：综述](https:\u002F\u002Fpaperswithcode.com\u002Fpaper\u002Ftransferability-in-deep-learning-a-survey)，由清华大学龙明盛团队发表。\n- 2021年 领域泛化：IJCAI-21 [面向未见领域的泛化：领域泛化的综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.03097) | [知乎文章](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F354740610) | [微信公众号](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FDsoVDYqLB1N7gj9X5UnYqw)\n  - 第一篇关于领域泛化的综述\n  - 第一篇对Domain generalization (领域泛化)的综述\n- 2021年 基于视觉的活动识别：[基于视觉的迁移学习在人体活动识别中的综述](https:\u002F\u002Fwww.mdpi.com\u002F2079-9292\u002F10\u002F19\u002F2412)\n- 2021年 ICSAI [结构健康监测中迁移学习的最新进展综述](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9664171)\n- 2020年 [迁移学习：综述与分类](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-981-15-5345-5_13)，收录于《智能系统与计算进展》。\n- 2020年 来自中科院计算所庄福振团队的迁移学习最新综述，发表在《IEEE会刊》上：[迁移学习综合综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02685)\n- 2020年 负迁移的综述：[克服负迁移：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00909)\n- 2020年 知识蒸馏的综述：[知识蒸馏：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05525)\n- 使用迁移学习进行情感分类的综述：[基于迁移学习的情感分析综述](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8746210)\n- 2019年 一篇新综述：[迁移适应学习：十年综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.04687)\n- 2018年 迁移度量学习的综述：[迁移度量学习：算法、应用及展望](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.03944)\n- 2018年 最近关于非对称情况下异构迁移学习的综述：[非对称异构迁移学习：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10834)\n- 2018年 神经风格迁移的综述：[神经风格迁移：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.04058)\n- 2018年 深度域适应的综述：[深度视觉域适应：综述](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0925231218306684)\n- 2017年 
来自香港科技大学杨强团队的多任务学习综述：[多任务学习综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08114)\n- 2017年 异构迁移学习的综述：[异构迁移学习综述](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1186\u002Fs40537-017-0089-0)\n- 2017年 跨领域数据识别的综述：[跨数据集识别：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.04396)\n- 2016年 [迁移学习综述](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1gfgXLXT)。其中介绍了一些比较经典的同构、异构等学习方法代表性文章。\n- 2015年 中文综述：[迁移学习研究进展](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1bpautob)\n- 2010年 [迁移学习综述](http:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F5288526\u002F)\n- 应用导向的综述：\n\t- 视觉域适应综述：[视觉域适应：最新进展综述](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1o8BR7Vc)\n\t- 迁移学习应用于行为识别的综述：[用于行为识别的迁移学习：综述](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1kVABOYr)\n\t- 迁移学习与强化学习：[用于强化学习领域的迁移学习：综述](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1slfr0w1)\n\t- 多个源域进行迁移的综述：[多源域适应综述](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1eSGREF4)。\n\n**理论文章：**\n\n- ICML-20 [基于因果机制迁移的少样本域适应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.03497.pdf)\n\t- 因果迁移学习的开创性工作\n\t- 日本理论界大佬Sugiyama的研究成果，涉及因果迁移学习\n- CVPR-19 [负迁移的表征与规避](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.09751)\n\t- 对负迁移的表征及规避方法进行了研究\n\t- 形式化并提出了如何避免负迁移的方法\n- ICML-20 [关于学习语言不变表示以实现通用机器翻译的理论](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.04510)\n  - 通用机器翻译模型的理论基础\n  - 对统一机器翻译模型进行了理论论证\n- NIPS-06 [针对域适应的表示分析](https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2976474)\n- ML-10 [不同领域间学习的理论](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs10994-009-5152-4)\n- NIPS-08 [域适应的学习界限](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F3212-learning-bounds-for-domain-adaptation)\n- COLT-09 [域适应：学习界限与算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F0902.3430)\n- MMD相关论文：[分布的希尔伯特空间嵌入](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-540-75225-7_5) 和 [核两样本检验](http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv13\u002Fgretton12a.html)\n- 
多核MMD相关论文：[大规模两样本检验的最佳核选择](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F4727-optimal-kernel-choice-for-large-scale-two-sample-tests)\n\n_ _ _\n\n## 4.代码\n\n统一的代码库包括：\n- [深度域适应](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDA)\n- [深度领域泛化](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode\u002FDeepDG)\n- 更多代码请参见：https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode。\n\n更多内容：请参阅 [这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Ftree\u002Fmaster\u002Fcode) 和 [这里](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1MVuk95mMg4ecGyUAIG94vedF81HtWQAr?usp=sharing) ，使用Google的Colab即可快速运行。\n\n_ _ _\n\n## 5.迁移学习学者\n\n以下列出一些著名的迁移学习学者及其研究团队。\n\n**完整列表及代表作请见[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md)** \n\n请注意，此列表远未完整。完整的学者名单可在 [这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fscholar_TL.md) 查看。迁移学习是一个活跃的研究领域。*如果您了解其他学者，请在此处补充。*\n\n_ _ _\n\n## 6.迁移学习硕博士论文\n\n以下是一些关于迁移学习的热门学位论文。\n\n[这里](https:\u002F\u002Fpan.baidu.com\u002Fshare\u002Finit?surl=iuzZhHdumrD64-yx_VAybA)，提取码：txyz。\n\n- - -\n\n## 7.数据集与评测基准\n\n流行的迁移学习 **数据集及评测结果** 请参见 [这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdata)。\n\n[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdata)整理了常用的公开数据集，以及一些已发表文章在这些数据集上的实验结果。\n\n- - -\n\n## 8.迁移学习挑战赛\n\n- [视觉域适应挑战赛（VisDA）](http:\u002F\u002Fai.bu.edu\u002Fvisda-2018\u002F)\n\n- - -\n\n## 期刊与会议\n\n相关期刊和会议的完整列表请参见 [这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Fvenues.md)。\n\n- - -\n\n## 应用（迁移学习应用）\n\n- 
[计算机视觉](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#computer-vision)\n- [医疗与健康](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#medical-and-healthcare)\n- [自然语言处理](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#natural-language-processing)\n- [时间序列](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#time-series)\n- [语音](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#speech)\n- [多媒体](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#multimedia)\n- [推荐系统](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#recommendation)\n- [人类活动识别](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#human-activity-recognition)\n- [自动驾驶](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#autonomous-driving)\n- [其他](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md#others)\n\n迁移学习的应用请参见[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002Fdoc\u002Ftransfer_learning_application.md)。\n\n- - -\n\n## 其他资源\n\n- 征稿启事：\n  - 
[迁移学习的进展：理论、算法与应用](https:\u002F\u002Fwww.frontiersin.org\u002Fresearch-topics\u002F21133\u002Fadvances-in-transfer-learning-theory-algorithms-and-applications)，截稿日期：2021年10月\n\n- 相关项目：\n  - Salad：[一个半监督域适应库](https:\u002F\u002Fdomainadaptation.org)\n\n- - -\n\n## 贡献说明（欢迎参与贡献）\n\n如果您有兴趣参与贡献，请参阅[这里](https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmaster\u002FCONTRIBUTING.md)，以获取详细的贡献指南。\n\n- - -\n\n### 版权声明\n\n> ***[注]本GitHub仓库可根据相应许可证使用。需要强调的是，其中可能包含一些由本人下载的PDF或论文，这些资料仅用于学术研究目的。相关材料的版权归相应出版社或机构所有。所有内容均旨在促进学术研究。如有任何作者或出版方对此有异议，请联系我以便删除或替换相关内容。***\n\n[contributors-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[contributors-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fgraphs\u002Fcontributors\n[forks-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[forks-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fnetwork\u002Fmembers\n[stars-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[stars-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fstargazers\n[issues-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[issues-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\n[license-shield]: https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fjindongwang\u002Ftransferlearning.svg?style=for-the-badge\n[license-url]: https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fblob\u002Fmain\u002FLICENSE.txt","# Transfer Learning 快速上手指南\n\n本指南基于 `transferlearning` 开源项目，旨在帮助开发者快速入门迁移学习（Transfer Learning）与领域自适应（Domain Adaptation）。该项目汇集了丰富的论文、教程、代码库及数据集。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (推荐 Linux)\n*   **Python 版本**：Python 3.7+\n*   **核心依赖**：\n    *   PyTorch (推荐 1.8+)\n    *   NumPy\n    *   Scikit-learn\n    *   Pandas\n*   **硬件建议**：若运行深度学习模型（如 DeepDA, DeepDG），建议配备 NVIDIA GPU 及 CUDA 环境。\n\n**前置依赖安装命令：**\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\npip install numpy scikit-learn pandas matplotlib\n```\n> **提示**：国内用户可使用清华源加速安装：\n> `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003Cpackage_name>`\n\n## 2. 安装步骤\n\n该项目主要是一个资源汇总与代码集合库，无需通过 `pip` 安装单一包，而是直接克隆仓库以获取最新的代码工具箱（如 DeepDA, DeepDG）和教程代码。\n\n**克隆仓库：**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning.git\ncd transferlearning\n```\n\n**可选：在线免安装运行**\n如果您不想在本地配置环境，可以直接在浏览器中打开并运行代码：\n*   **GitHub Codespaces**: 点击仓库页面的 \"Code\" -> \"Codespaces\" -> \"Create codespace on master\"。\n*   **github.dev**: 在浏览器地址栏将 URL 中的 `github.com` 替换为 `github.dev` 即可进入在线编辑器。\n\n## 3. 基本使用\n\n项目提供了多个子模块，涵盖从基础的微调（Finetune）到统一的领域自适应工具箱。以下是两个最常用的快速示例。\n\n### 示例 A：基于 PyTorch 的模型微调 (Finetune)\n适用于图像分类任务的迁移学习入门。\n\n1.  **进入代码目录**：\n    ```bash\n    cd code\u002FAlexNet_ResNet\n    ```\n2.  **查看并运行示例**：\n    该目录通常包含基于预训练模型（如 ResNet, AlexNet）进行微调的脚本。假设存在 `main.py` 或类似入口文件：\n    ```bash\n    # 运行微调示例 (具体参数请参考目录下的 README 或脚本注释)\n    python main.py --model resnet50 --dataset office31\n    ```\n    *注：首次运行会自动下载预训练权重和数据集。*\n\n### 示例 B：使用统一领域自适应工具箱 (DeepDA)\n`DeepDA` 是一个统一的深度领域自适应工具箱，集成了多种经典算法（如 DANN, DAN, JAN 等）。\n\n1.  **进入工具箱目录**：\n    ```bash\n    cd code\u002FDeepDA\n    ```\n2.  **安装工具箱特定依赖**（如有 `requirements.txt`）：\n    ```bash\n    pip install -r requirements.txt\n    ```\n3.  
**运行算法示例**：\n    以下命令演示如何运行一个典型的无监督领域自适应任务（例如从 Amazon 数据集迁移到 Webcam 数据集）：\n    ```bash\n    python train.py --src_domain amazon --tgt_domain webcam --method DANN\n    ```\n    *   `--method`: 可替换为其他算法名称，如 `DAN`, `JAN`, `ADDA` 等。\n    *   详细参数列表请运行 `python train.py --help` 查看。\n\n### 示例 C：使用领域泛化工具箱 (DeepDG)\n针对未知目标域的泛化任务：\n\n```bash\ncd code\u002FDeepDG\npython train.py --dataset PACS --model resnet18 --algorithm ERM\n```\n\n---\n**更多资源**：\n*   **完整代码库**：访问 `code\u002F` 目录下查看更多算法实现。\n*   **最新论文**：查看 `doc\u002Fawesome_paper.md` 获取按主题和日期整理的论文列表。\n*   **中文教程**：参考项目首页链接中的《迁移学习导论》书籍及知乎专栏《小王爱迁移》系列文章。","某医疗 AI 团队试图利用公开的大型皮肤癌图像数据集训练模型，并将其应用到一家新医院特有的低分辨率、不同光照条件的临床影像诊断中。\n\n### 没有 transferlearning 时\n- **数据冷启动困难**：新医院仅有少量标注病例，从头训练深度神经网络导致模型严重过拟合，无法收敛。\n- **领域分布差异大**：直接套用公开数据集训练的模型，因设备成像风格差异（域偏移），在本地测试集上准确率暴跌至 40% 以下。\n- **研发周期漫长**：团队需手动复现多篇迁移学习论文代码来尝试适配，缺乏统一基准，试错成本极高且耗时数周。\n- **理论落地脱节**：面对领域自适应（Domain Adaptation）等复杂概念，缺乏系统的教程和权威论文索引，算法选型盲目。\n\n### 使用 transferlearning 后\n- **小样本高效迁移**：直接调用库中成熟的领域自适应算法（如 DAN、DANN），利用少量本地数据即可将源域知识有效迁移，模型快速收敛。\n- **消除域间偏差**：应用库中预置的域泛化策略，显著降低了不同医疗设备间的特征分布差异，本地诊断准确率提升至 85% 以上。\n- **开箱即用的代码基线**：基于提供的标准化代码库和评测基准，团队在一天内完成了多种算法的对比实验，大幅缩短研发路径。\n- **前沿理论一站式获取**：通过整理的顶会论文、综述及学者资源，迅速锁定最适合医疗影像的最新技术路线，避免重复造轮子。\n\ntransferlearning 通过整合从理论到代码的全栈资源，将跨领域模型落地的门槛从“科研级探索”降低为“工程级应用”，极大加速了 AI 在数据稀缺场景下的价值释放。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjindongwang_transferlearning_aaf04416.jpg","jindongwang","Jindong Wang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjindongwang_e681bbd4.jpg","Senior Researcher at Microsoft Research. 
Research interest: robust machine learning, transfer learning, out-of-distribution generalization, general ML.","@microsoft","Beijing, China","jindongwang@outlook.com","jd92wang","http:\u002F\u002Fwww.jd92.wang","https:\u002F\u002Fgithub.com\u002Fjindongwang",[86,90,94,98,102,106,110,114],{"name":87,"color":88,"percentage":89},"Python","#3572A5",86.4,{"name":91,"color":92,"percentage":93},"MATLAB","#e16737",4.1,{"name":95,"color":96,"percentage":97},"Jupyter Notebook","#DA5B0B",3.5,{"name":99,"color":100,"percentage":101},"Shell","#89e051",3.1,{"name":103,"color":104,"percentage":105},"Makefile","#427819",1.4,{"name":107,"color":108,"percentage":109},"Cuda","#3A4E3A",1,{"name":111,"color":112,"percentage":113},"C++","#f34b7d",0.3,{"name":115,"color":116,"percentage":117},"CMake","#DA3434",0.2,14318,3842,"2026-04-03T05:10:12","MIT","未说明",{"notes":124,"python":122,"dependencies":125},"该项目主要是一个迁移学习领域的资源汇总（论文、教程、数据集），而非单一的独立软件工具。具体的运行环境需求取决于用户选择运行的子项目代码（如 DeepDA, DeepDG 等），这些代码通常基于 PyTorch。README 建议可以直接在 GitHub Codespaces 或 github.dev 中在线运行代码，无需本地配置环境。",[126,127],"PyTorch (提及用于教程和代码库)","未明确列出具体版本或其他库",[13],[67,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148],"domain-adaptation","transfer-learning","survey","deep-learning","generalization","few-shot","tutorial-code","theory","papers","few-shot-learning","meta-learning","domain-generalization","representation-learning","unsupervised-learning","machine-learning","self-supervised-learning","paper","style-transfer","domain-adaption",null,"2026-03-27T02:49:30.150509","2026-04-06T05:17:22.283237",[153,158,163,168,173,178,182],{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},12087,"为什么复现 Office-31 数据集上的 AlexNet 微调结果与论文报告的不一致？","主要原因在于评估协议（Protocol）不同。许多经典论文（如 DDC, DAN, JAN 等）使用的是“下采样”协议（down-sample），即每个类别仅选取部分样本（如源域和目标域各 20 个或 8 个样本）进行训练和测试，此时 AlexNet 准确率约为 61%。而本仓库代码默认使用“全量训练”协议（full-training），即使用整个域的所有数据。由于协议不同，直接对比数值会产生误解。此外，TensorFlow 与 Caffe 在卷积填充（Conv paddings）、LRN 
层参数以及数据预处理流程（如必须严格使用 cv2.resize 而非 tf.image）上的差异也会显著影响结果。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F23",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},12088,"为什么我运行的 ERM（经验风险最小化）基准模型效果比一些改进的领域泛化算法还要好？","这是正常现象。根据《In Search of Lost Domain Generalization》等研究指出，许多复杂的领域泛化方法相对于简单的 ERM 基线并没有显著提升，甚至有时不如 ERM。此外，仓库中公开的结果可能是早期的实验数据，后续代码在数据增强方式和训练过程上可能有所改进，导致新运行的 ERM 结果更高。建议参考最新的实验结果截图，并注意验证集划分和环境差异可能导致的小幅波动。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F286",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},12089,"在训练过程中 Coral_loss 始终为 0 是什么原因？","CORAL 损失基于特征的二阶统计量（协方差矩阵），如果批量大小（batch size）过小，可能导致计算不稳定或结果为 0。建议将 batch size 增大（例如设置为 128 或 256）再尝试运行。如果问题依旧，请检查是否使用了最新版本的代码，因为维护者曾重写 DCORAL 和 DDC 的代码以修复相关实现问题。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F97",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},12090,"DANN 代码运行时出现维度不匹配错误（output shape [1, 28, 28] doesn't match [3, 28, 28]）怎么办？","该错误通常由图像通道数不匹配引起。Office+Caltech 原始数据集是彩色图像（3 通道），而代码中可能包含了 `transforms.Grayscale()` 将图像转换为灰度（1 通道），导致后续网络输入维度冲突。解决方法是注释掉数据处理部分的 `transforms.Grayscale()`。如果注释后仍报错，请确保 PyTorch 版本兼容性（如使用 PyTorch 1.3 和 Python 3.7 环境），并检查数据加载器是否正确输出了 3 通道图像。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F118",{"id":174,"question_zh":175,"answer_zh":176,"source_url":177},12091,"在无标签的目标域上进行域自适应训练时，应该保存哪个 epoch 的模型？","如果在训练过程中目标域是有标签的（用于验证），应当保存测试准确率（test accuracy）最高的那个 epoch 的模型。如果在实际应用中目标域完全无标签，通常无法直接通过目标域准确率来选择模型，此时一般依据源域的验证集表现，或者采用早停法（Early Stopping）观察源域损失收敛情况来决定保存点。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F201",{"id":179,"question_zh":180,"answer_zh":181,"source_url":162},12092,"DeepDG 任务中，针对不同留一法（leave-one-domain-out）设置（如 PACS 中的 P->A 或 A->C），可以使用不同的超参数吗？","可以且建议采取不同的超参数。在领域泛化任务中，通常将每个留一法任务（例如从特定源域组合迁移到特定目标域）视为相互独立的单独任务。因此，针对每个具体任务调整学习率、Epoch 
数或其他超参数是正常的做法，以获得该特定迁移场景下的最优性能。",{"id":183,"question_zh":184,"answer_zh":185,"source_url":186},12093,"运行 DeepDG 代码时，随着 epoch 增加每次迭代时间变长最终导致崩溃，如何解决？","这种情况通常与显存泄漏或数据加载机制有关。虽然具体案例中未给出确切的代码修复，但常见原因包括：未正确释放中间变量、数据增强操作在循环中累积、或 DataLoader 的 num_workers 设置不当。建议检查代码中是否有张量未及时 detach 或 cpu()，尝试减少 DataLoader 的 worker 数量，或监控显存使用情况以定位泄漏点。同时确认 `test_envs` 索引是否正确对应数据集（如 0 对应 Art, 2 对应 Product），避免索引错误导致的异常行为。","https:\u002F\u002Fgithub.com\u002Fjindongwang\u002Ftransferlearning\u002Fissues\u002F247",[]]
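
The finetuning workflow of Example A reduces to freezing a pretrained backbone and training a new classification head on the target task. A minimal PyTorch sketch of that pattern (this uses a small stand-in `nn.Sequential` backbone instead of a real pretrained ResNet so it runs without downloads; the names `backbone` and `head` are illustrative, not the repository's API):

```python
import torch
import torch.nn as nn

# Stand-in "pretrained backbone" (in practice: a torchvision ResNet with
# pretrained weights). Here it just maps 32-dim inputs to 16-dim features.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
head = nn.Linear(16, 3)  # new task-specific classifier head (3 target classes)

# Freeze the backbone: only the head is updated during finetuning.
for p in backbone.parameters():
    p.requires_grad = False

opt = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)              # a toy batch of target-domain inputs
y = torch.randint(0, 3, (8,))       # toy labels
with torch.no_grad():               # backbone acts as a fixed feature extractor
    feats = backbone(x)
loss = loss_fn(head(feats), y)      # train only the new head
loss.backward()
opt.step()
```

In real use, unfreezing the last backbone blocks with a smaller learning rate is a common middle ground between pure feature extraction and full finetuning.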
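
The MMD papers listed under Theory define the discrepancy measure behind methods such as DDC and DAN. As a concrete illustration, here is a NumPy sketch of the biased empirical MMD² with a single RBF kernel (the repository's toolkits use multi-kernel PyTorch implementations; the `gamma` value here is an arbitrary choice for the toy data):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.1):
    """Biased empirical MMD^2 between samples X (source) and Y (target)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 5))
tgt_same = rng.normal(0.0, 1.0, size=(200, 5))   # same distribution as source
tgt_shift = rng.normal(2.0, 1.0, size=(200, 5))  # mean-shifted "target domain"

print(mmd2(src, tgt_same))   # close to 0: no domain gap
print(mmd2(src, tgt_shift))  # substantially larger: clear domain gap
```

Deep domain adaptation methods minimize exactly this kind of quantity between source and target feature batches, alongside the source classification loss.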
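
The FAQ answer about `Coral_loss` being 0 at small batch sizes can be made concrete. Below is a minimal NumPy sketch of the CORAL loss (the squared Frobenius distance between source and target feature covariances, following Sun and Saenko's formulation; this is an illustration, not the repository's implementation). With a batch of size 1, the centered features vanish, so the loss is exactly 0, which is why larger batches are recommended:

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL loss: squared Frobenius distance between feature covariances."""
    d = Xs.shape[1]
    def cov(X):
        n = X.shape[0]
        Xc = X - X.mean(0, keepdims=True)      # center the batch
        return Xc.T @ Xc / max(n - 1, 1)       # covariance; guard against n == 1
    diff = cov(Xs) - cov(Xt)
    return np.sum(diff**2) / (4 * d * d)

rng = np.random.default_rng(1)
big_s = rng.normal(0, 1.0, (256, 8))           # source features, unit variance
big_t = rng.normal(0, 2.0, (256, 8))           # target features, larger variance

print(coral_loss(big_s, big_t))                # nonzero: covariances differ
print(coral_loss(big_s[:1], big_t[:1]))        # batch size 1: degenerates to 0
```

With an adequate batch size the loss reflects the true second-order mismatch; with tiny batches the covariance estimates collapse, which matches the behavior reported in the issue.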