[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-FLHonker--Awesome-Knowledge-Distillation":3,"tool-FLHonker--Awesome-Knowledge-Distillation":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
FLHonker/Awesome-Knowledge-Distillation: Awesome Knowledge-Distillation. Knowledge distillation papers (2014-2021), organized by category.

Awesome-Knowledge-Distillation is an open-source paper collection focused on knowledge distillation, systematically organizing and categorizing the field's core research from 2014 to 2021. In deep learning, large models tend to perform very well but are computationally expensive and hard to deploy on resource-constrained devices. Knowledge distillation tackles model compression and acceleration by transferring the knowledge of a large model (the teacher) to a small model (the student), so that lightweight models can approach the performance of much larger ones.

The collection is aimed at AI researchers, algorithm engineers, and developers interested in model optimization. It is more than a flat reading list: several hundred papers are grouped by technical approach, from knowledge carried in output logits, intermediate-layer features, and graph-structured representations, to directions that combine distillation with GANs, meta-learning, and data-free training. It also gathers applications of knowledge distillation in natural language processing, recommender systems, and model quantization and pruning. Whether you want a quick overview of the field or are hunting for a specific technique, it offers an efficient entry point into the literature on building more efficient, more compact deep models.
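To make the teacher-student transfer described above concrete, below is a minimal sketch of the classic logit-based distillation loss from Hinton et al. (arXiv:1503.02531), the first entry in the list that follows. It is an illustrative PyTorch snippet rather than code from this repository; the temperature `T`, the mixing weight `alpha`, and the toy linear teacher and student are assumptions chosen only for the example.

```python
# Minimal logit-distillation sketch (Hinton et al., 2015), for illustration only.
# Assumptions: a pre-trained `teacher`, a smaller `student`, and hyperparameters
# T (temperature) and alpha (soft/hard mixing weight) picked arbitrarily here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T*T factor keeps the soft-target gradients on the same scale as CE.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random data and throwaway linear models.
teacher = nn.Linear(32, 10)   # stands in for a large, pre-trained teacher
student = nn.Linear(32, 10)   # stands in for the compact student being trained
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)     # teacher is frozen during distillation
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

Many of the entries under "Knowledge from logits" below can be read as variations on this objective, changing the divergence, the temperature, or where the soft targets come from.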
# Awesome Knowledge-Distillation

![counter](https://img.shields.io/badge/Number-658-green)
[![star](https://img.shields.io/github/stars/FLHonker/Awesome-Knowledge-Distillation?label=star&style=social)](https://github.com/FLHonker/Awesome-Knowledge-Distillation)

- [Awesome Knowledge-Distillation](#awesome-knowledge-distillation)
  - [Different forms of knowledge](#different-forms-of-knowledge)
    - [Knowledge from logits](#knowledge-from-logits)
    - [Knowledge from intermediate layers](#knowledge-from-intermediate-layers)
    - [Graph-based](#graph-based)
    - [Mutual Information & Online Learning](#mutual-information--online-learning)
    - [Self-KD](#self-kd)
    - [Structural Knowledge](#structural-knowledge)
    - [Privileged Information](#privileged-information)
  - [KD + GAN](#kd--gan)
  - [KD + Meta-learning](#kd--meta-learning)
  - [Data-free KD](#data-free-kd)
  - [KD + AutoML](#kd--automl)
  - [KD + RL](#kd--rl)
  - [KD + Self-supervised](#kd--self-supervised)
  - [Multi-teacher and Ensemble KD](#multi-teacher-and-ensemble-kd)
    - [Knowledge Amalgamation（KA) - zju-VIPA](#knowledge-amalgamationka---zju-vipa)
  - [Cross-modal / DA / Incremental Learning](#cross-modal--da--incremental-learning)
  - [Application of KD](#application-of-kd)
    - [for NLP & Data-Mining](#for-nlp--data-mining)
    - [for RecSys](#for-recsys)
  - [Model Pruning or Quantization](#model-pruning-or-quantization)
  - [Beyond](#beyond)
  - [Distiller Tools](#distiller-tools)

## Different forms of knowledge

### Knowledge from logits

1. Distilling the knowledge in a neural network. Hinton et al. arXiv:1503.02531
2. Learning from Noisy Labels with Distillation. Li, Yuncheng et al. ICCV 2017
3. Training Deep Neural Networks in Generations: A More Tolerant Teacher Educates Better Students. arXiv:1805.05551
4. Learning Metrics from Teachers: Compact Networks for Image Embedding. Yu, Lu et al. CVPR 2019
5. Relational Knowledge Distillation. Park, Wonpyo et al. CVPR 2019
6. On Knowledge Distillation from Complex Networks for Response Prediction. Arora, Siddhartha et al. NAACL 2019
7. On the Efficacy of Knowledge Distillation. Cho, Jang Hyun & Hariharan, Bharath. arXiv:1910.01348. ICCV 2019
8. Revisit Knowledge Distillation: a Teacher-free Framework (Revisiting Knowledge Distillation via Label Smoothing Regularization). Yuan, Li et al. CVPR 2020 [[code]][1.10]
9. Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher. Mirzadeh et al. arXiv:1902.03393
10. Ensemble Distribution Distillation. ICLR 2020
11. Noisy Collaboration in Knowledge Distillation. ICLR 2020
12. On Compressing U-net Using Knowledge Distillation. arXiv:1812.00249
13. Self-training with Noisy Student improves ImageNet classification. Xie, Qizhe et al. (Google) CVPR 2020
14. Variational Student: Learning Compact and Sparser Networks in Knowledge Distillation Framework. AAAI 2020
15. 
Preparing Lessons: Improve Knowledge Distillation with Better Supervision. arXiv:1911.07471\n16. Adaptive Regularization of Labels. arXiv:1908.05474\n17. Positive-Unlabeled Compression on the Cloud. Xu, Yixing et al. (HUAWEI) NeurIPS 2019\n18. Snapshot Distillation: Teacher-Student Optimization in One Generation. Yang, Chenglin et al. CVPR 2019\n19. QUEST: Quantized embedding space for transferring knowledge. Jain, Himalaya et al. arXiv:2020\n20. Conditional teacher-student learning. Z. Meng et al. ICASSP 2019\n21. Subclass Distillation. Müller, Rafael et al. arXiv:2002.03936\n22. MarginDistillation: distillation for margin-based softmax. Svitov, David & Alyamkin, Sergey. arXiv:2003.02586\n23. An Embarrassingly Simple Approach for Knowledge Distillation. Gao, Mengya et al. MLR 2018\n24. Sequence-Level Knowledge Distillation. Kim, Yoon & Rush, Alexander M. arXiv:1606.07947\n25. Boosting Self-Supervised Learning via Knowledge Transfer. Noroozi, Mehdi et al. CVPR 2018\n26. Meta Pseudo Labels. Pham, Hieu et al. ICML 2020 [[code]][1.26]\n27. Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model. CVPR 2020 [[code]][1.30]\n28. Distilled Binary Neural Network for Monaural Speech Separation. Chen Xiuyi et al. IJCNN 2018\n29. Teacher-Class Network: A Neural Network Compression Mechanism. Malik et al. arXiv:2004.03281\n30. Deeply-supervised knowledge synergy. Sun, Dawei et al. CVPR 2019\n31. What it Thinks is Important is Important: Robustness Transfers through Input Gradients. Chan, Alvin et al. CVPR 2020\n32. Triplet Loss for Knowledge Distillation. Oki, Hideki et al. IJCNN 2020\n33. Role-Wise Data Augmentation for Knowledge Distillation. ICLR 2020 [[code]][1.36]\n34. Distilling Spikes: Knowledge Distillation in Spiking Neural Networks. arXiv:2005.00288\n35. Improved Noisy Student Training for Automatic Speech Recognition. Park et al. arXiv:2005.09629\n36. Learning from a Lightweight Teacher for Efficient Knowledge Distillation. Yuang Liu et al. arXiv:2005.09163\n37. ResKD: Residual-Guided Knowledge Distillation. Li, Xuewei et al. arXiv:2006.04719\n38. Distilling Effective Supervision from Severe Label Noise. Zhang, Zizhao. et al. CVPR 2020 [[code]][1.41]\n39. Knowledge Distillation Meets Self-Supervision. Xu, Guodong et al. ECCV 2020 [[code]][1.42]\n40. Self-supervised Knowledge Distillation for Few-shot Learning. arXiv:2006.09785 [[code]][1.43]\n41. Learning with Noisy Class Labels for Instance Segmentation. ECCV 2020\n42. Improving Weakly Supervised Visual Grounding by Contrastive Knowledge Distillation. Wang, Liwei et al. arXiv:2007.01951\n43. Deep Streaming Label Learning. Wang, Zhen et al. ICML 2020 [[code]][1.46]\n44. Teaching with Limited Information on the Learner's Behaviour. Zhang, Yonggang et al. ICML 2020\n45. Discriminability Distillation in Group Representation Learning. Zhang, Manyuan et al. ECCV 2020\n46. Local Correlation Consistency for Knowledge Distillation. ECCV 2020\n47. Prime-Aware Adaptive Distillation. Zhang, Youcai et al. ECCV 2020\n48. One Size Doesn't Fit All: Adaptive Label Smoothing. Krothapalli et al. arXiv:2009.06432\n49. Learning to learn from noisy labeled data. Li, Junnan et al. CVPR 2019\n50. Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization. Wei, Hongxin et al. CVPR 2020\n51. Online Knowledge Distillation via Multi-branch Diversity Enhancement. Li, Zheng et al. ACCV 2020\n52. Pea-KD: Parameter-efficient and Accurate Knowledge Distillation. 
arXiv:2009.14822\n53. Extending Label Smoothing Regularization with Self-Knowledge Distillation. Wang, Jiyue et al. arXiv:2009.05226\n54. Spherical Knowledge Distillation. Guo, Jia et al. arXiv:2010.07485\n55. Soft-Label Dataset Distillation and Text Dataset Distillation. arXiv:1910.02551\n56. Wasserstein Contrastive Representation Distillation. Chen, Liqun et al. cvpr 2021\n57. Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup. Xu, Guodong et al. cvpr 2021 [[code]][1.59]\n58. Knowledge Refinery: Learning from Decoupled Label. Ding, Qianggang et al. AAAI 2021\n59. Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net. Zhou, Guorui et al. AAAI 2018\n60. Distilling Virtual Examples for Long-tailed Recognition. He, Yin-Yin et al. CVPR 2021\n61. Balanced Knowledge Distillation for Long-tailed Learning. Zhang, Shaoyu et al. arXiv:2014.10510\n62. Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation. Kim, Taehyeon et al. IJCAI 2021 [[code]][1.62]\n63. Not All Knowledge Is Created Equal. Li, Ziyun et al. arXiv:2106.01489\n64. Knowledge distillation: A good teacher is patient and consistent. Beyer et al. arXiv:2106.05237v1\n65. Hierarchical Self-supervised Augmented Knowledge Distillation. Yang et al. IJCAI 2021 [[code]][1.65]\n\n### Knowledge from intermediate layers\n\n1. Fitnets: Hints for thin deep nets. Romero, Adriana et al. arXiv:1412.6550\n2. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. Zagoruyko et al. ICLR 2017\n3. Knowledge Projection for Effective Design of Thinner and Faster Deep Neural Networks. Zhang, Zhi et al. arXiv:1710.09505\n4. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning. Yim, Junho et al. CVPR 2017\n5. Like What You Like: Knowledge Distill via Neuron Selectivity Transfer. Huang, Zehao & Wang, Naiyan. 2017\n6. Paraphrasing complex network: Network compression via factor transfer. Kim, Jangho et al. NeurIPS 2018\n7. Knowledge transfer with jacobian matching. ICML 2018\n8. Self-supervised knowledge distillation using singular value decomposition. Lee, Seung Hyun et al. ECCV 2018\n9. Learning Deep Representations with Probabilistic Knowledge Transfer. Passalis et al. ECCV 2018\n10. Variational Information Distillation for Knowledge Transfer. Ahn, Sungsoo et al. CVPR 2019\n11. Knowledge Distillation via Instance Relationship Graph. Liu, Yufan et al. CVPR 2019\n12. Knowledge Distillation via Route Constrained Optimization. Jin, Xiao et al. ICCV 2019\n13. Similarity-Preserving Knowledge Distillation. Tung, Frederick, and Mori Greg. ICCV 2019\n14. MEAL: Multi-Model Ensemble via Adversarial Learning. Shen,Zhiqiang, He,Zhankui, and Xue Xiangyang. AAAI 2019\n15. A Comprehensive Overhaul of Feature Distillation. Heo, Byeongho et al. ICCV 2019 [[code]][2.15]\n16. Feature-map-level Online Adversarial Knowledge Distillation. ICML 2020\n17. Distilling Object Detectors with Fine-grained Feature Imitation. ICLR 2020\n18. Knowledge Squeezed Adversarial Network Compression. Changyong, Shu et al. AAAI 2020\n19. Stagewise Knowledge Distillation. Kulkarni, Akshay et al. arXiv: 1911.06786\n20. Knowledge Distillation from Internal Representations. AAAI 2020\n21. Knowledge Flow:Improve Upon Your Teachers. ICLR 2019\n22. LIT: Learned Intermediate Representation Training for Model Compression. ICML 2019\n23. 
Improving the Adversarial Robustness of Transfer Learning via Noisy Feature Distillation. Chin, Ting-wu et al. arXiv:2002.02998\n24. Knapsack Pruning with Inner Distillation. Aflalo, Yonathan et al. arXiv:2002.08258\n25. Residual Knowledge Distillation. Gao, Mengya et al. arXiv:2002.09168\n26. Knowledge distillation via adaptive instance normalization. Yang, Jing et al. arXiv:2003.04289\n27. Bert-of-Theseus: Compressing bert by progressive module replacing. Xu, Canwen et al. arXiv:2002.02925 [[code]][2.27]\n28. Distilling Spikes: Knowledge Distillation in Spiking Neural Networks. arXiv:2005.00727\n29. Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks. Meet et al. arXiv:2005.08110\n30. Feature-map-level Online Adversarial Knowledge Distillation. Chung, Inseop et al. ICML 2020\n31. Channel Distillation: Channel-Wise Attention for Knowledge Distillation. Zhou, Zaida et al. arXiv:2006.01683 [[code]][2.30]\n32. Matching Guided Distillation. ECCV 2020 [[code]][2.31]\n33. Differentiable Feature Aggregation Search for Knowledge Distillation. ECCV 2020\n34. Interactive Knowledge Distillation. Fu, Shipeng et al. arXiv:2007.01476\n35. Feature Normalized Knowledge Distillation for Image Classification. ECCV 2020 [[code]][2.34]\n36. Layer-Level Knowledge Distillation for Deep Neural Networks. Li, Hao Ting et al. Applied Sciences, 2019\n37. Knowledge Distillation with Feature Maps for Image Classification. Chen, Weichun et al. ACCV 2018\n38. Efficient Kernel Transfer in Knowledge Distillation. Qian, Qi et al. arXiv:2009.14416\n39. Collaborative Distillation in the Parameter and Spectrum Domains for Video Action Recognition. arXiv:2009.06902\n40. Kernel Based Progressive Distillation for Adder Neural Networks. Xu, Yixing et al. NeurIPS 2020\n41. Feature Distillation With Guided Adversarial Contrastive Learning. Bai, Tao et al. arXiv:2009.09922\n42. Pay Attention to Features, Transfer Learn Faster CNNs. Wang, Kafeng et al. ICLR 2019\n43. Multi-level Knowledge Distillation. Ding, Fei et al. arXiv:2012.00573\n44. Cross-Layer Distillation with Semantic Calibration. Chen, Defang et al. AAAI 2021 [[code]][2.44]\n45. Harmonized Dense Knowledge Distillation Training for Multi-­Exit Architectures. Wang, Xinglu & Li, Yingming. AAAI 2021\n46. Robust Knowledge Transfer via Hybrid Forward on the Teacher-Student Model. Song, Liangchen et al. AAAI 2021\n47. Show, Attend and Distill: Knowledge Distillation via Attention-­Based Feature Matching. Ji, Mingi et al. AAAI 2021 [[code]][2.47]\n48. MINILMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers. Wang, Wenhui et al. arXiv:2012.15828\n49. ALP-KD: Attention-Based Layer Projection for Knowledge Distillation. Peyman et al. AAAI 2021\n50. In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation. Reyhan et al. arXiv:2103.00053\n51. Fixing the Teacher-Student Knowledge Discrepancy in Distillation. Han, Jiangfan et al. arXiv:2103.16844\n52. Student Network Learning via Evolutionary Knowledge Distillation. Zhang, Kangkai et al. arXiv:2103.13811\n53. Distilling Knowledge via Knowledge Review. Chen, Pengguang et al. CVPR 2021\n54. Knowledge Distillation By Sparse Representation Matching. Tran et al. arXiv:2103.17012\n55. Task-Oriented Feature Distillation. Zhang et al. NeurIPS 2020 [[code]][2.55]\n56. Adversarial Knowledge Transfer from Unlabeled Data. Gupta et al. ACM-MM 2020 [code](https:\u002F\u002Fgithub.com\u002Fagupt013\u002Fakt)\n57. 
Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability. He et al. CVPR 2020\n58. PDF-Distil: Including Prediction Disagreements in Feature-based Knowledge Distillation for Object Detection. Zhang et al. BMVC 2021 [code](https:\u002F\u002Fgithub.com\u002FZHANGHeng19931123\u002FMutualGuide)\n\n### Graph-based\n\n1. Graph-based Knowledge Distillation by Multi-head Attention Network. Lee, Seunghyun and Song, Byung. Cheol arXiv:1907.02226\n2. Graph Representation Learning via Multi-task Knowledge Distillation. arXiv:1911.05700\n3. Deep geometric knowledge distillation with graphs. arXiv:1911.03080\n4. Better and faster: Knowledge transfer from multiple self-supervised learning tasks via graph distillation for video classification. IJCAI 2018\n5. Distillating Knowledge from Graph Convolutional Networks. Yang, Yiding et al. CVPR 2020 [[code]][2.46]\n6. Saliency Prediction with External Knowledge. Zhang, Yifeng et al. arXiv:2007.13839\n7. Multi-label Zero-shot Classification by Learning to Transfer from External Knowledge. Huang, He et al. arXiv:2007.15610\n8. Reliable Data Distillation on Graph Convolutional Network. Zhang, Wentao et al. ACM SIGMOD 2020\n9. Mutual Teaching for Graph Convolutional Networks. Zhan, Kun et al. Future Generation Computer Systems, 2021\n10. DistilE: Distiling Knowledge Graph Embeddings for Faster and Cheaper Reasoning. Zhu, Yushan et al. arXiv:2009.05912\n11. Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation. Antaris, Stefanos & Rafailidis, Dimitrios. arXiv:2011.05664\n12. On Self-Distilling Graph Neural Network. Chen, Yuzhao et al. arXiv:2011.02255\n13. Iterative Graph Self Distillation. iclr 2021\n14. Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework. Yang, Cheng et al. WWW 2021 [[code]][2.45]\n15. Graph Distillation for Action Detection with Privileged Information in RGB-D Videos. Luo, Zelun et al. ECCV 2018\n16. Graph Consistency based Mean-Teaching for Unsupervised Domain Adaptive Person Re-Identification. Liu, Xiaobin & Zhang, Shiliang. IJCAI 2021\n\n### Mutual Information & Online Learning\n\n1. Correlation Congruence for Knowledge Distillation. Peng, Baoyun et al. ICCV 2019\n2. Similarity-Preserving Knowledge Distillation. Tung, Frederick, and Mori Greg. ICCV 2019\n3. Variational Information Distillation for Knowledge Transfer. Ahn, Sungsoo et al. CVPR 2019\n4. Contrastive Representation Distillation. Tian, Yonglong et al. ICLR 2020 [[RepDistill]][4.4]\n5. Online Knowledge Distillation via Collaborative Learning. Guo, Qiushan et al. CVPR 2020\n6. Peer Collaborative Learning for Online Knowledge Distillation. Wu, Guile & Gong, Shaogang. AAAI 2021\n7. Knowledge Transfer via Dense Cross-layer Mutual-distillation. ECCV 2020\n8. MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution. Yang, Taojiannan et al. ECCV 2020 [[code]][4.9]\n9. AMLN: Adversarial-based Mutual Learning Network for Online Knowledge Distillation. ECCV 2020\n10. Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge. Li, Kang et al. AAAI 2021\n11. Federated Knowledge Distillation. Seo, Hyowoon et al. arXiv:2011.02367\n12. Unsupervised Image Segmentation using Mutual Mean-Teaching. Wu, Zhichao et al.arXiv:2012.08922\n13. Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning. Cai, Zhaowei et al. arXiv:2101.08482\n14. 
Robust Mutual Learning for Semi-supervised Semantic Segmentation. Zhang, Pan et al. arXiv:2106.00609\n15. Mutual Contrastive Learning for Visual Representation Learning. Yang et al. AAAI 2022 [[code]][4.15]\n16. Information Theoretic Representation Distillation. Miles et al. BMVC 2022 [[code]][19.10]\n\n### Self-KD\n\n1. Moonshine: Distilling with Cheap Convolutions. Crowley, Elliot J. et al. NeurIPS 2018 \n2. Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. Zhang, Linfeng et al. ICCV 2019\n3. Learning Lightweight Lane Detection CNNs by Self Attention Distillation. Hou, Yuenan et al. ICCV 2019\n4. BAM! Born-Again Multi-Task Networks for Natural Language Understanding. Clark, Kevin et al. ACL 2019,short\n5. Self-Knowledge Distillation in Natural Language Processing. Hahn, Sangchul and Choi, Heeyoul. arXiv:1908.01851\n6. Rethinking Data Augmentation: Self-Supervision and Self-Distillation. Lee, Hankook et al. ICLR 2020\n7. MSD: Multi-Self-Distillation Learning via Multi-classifiers within Deep Neural Networks. arXiv:1911.09418\n8. Self-Distillation Amplifies Regularization in Hilbert Space. Mobahi, Hossein et al. NeurIPS 2020\n9. MINILM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers. Wang, Wenhui et al. arXiv:2002.10957\n10. Regularizing Class-wise Predictions via Self-knowledge Distillation. CVPR 2020 [[code]][5.11]\n11. Self-Distillation as Instance-Specific Label Smoothing. Zhang, Zhilu & Sabuncu, Mert R. NeurIPS 2020\n12. Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training. Chen, Xuxi et al. ICML 2020 [[code]][5.13]\n13. S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning. Karsten et al. ICML 2021\n14. Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection. Huang, Zeyi et al. NeurIPS 2020\n15. Distillation-Based Training for Multi-Exit Architectures. Phuong, Mary and Lampert, Christoph H. ICCV 2019\n16. Pair-based self-distillation for semi-supervised domain adaptation. iclr 2021\n17. SEED: SElf-SupErvised Distillation. ICLR 2021\n18. Self-Feature Regularization: Self-Feature Distillation Without Teacher Models. Fan, Wenxuan & Hou, Zhenyan.arXiv:2103.07350\n19. Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation. Ji, Mingi et al. CVPR 2021 [[code]][5.19]\n20. SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud. Zheng, Wu et al. CVPR 2021 [[code]][5.20]\n21. Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification. Ge, Yixiao et al. CVPR 2021\n22. Towards Compact Single Image Super-Resolution via Contrastive Self-distillation. IJCAI 2021\n23. DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.12997.pdf)\n24. Knowledge Distillation with the Reused Teacher Classifier [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.14001.pdf)\n25. Self-Distillation from the Last Mini-Batch for Consistency Regularizatio [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.16172.pdf)\n26. Decoupled Knowledge Distillation [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.08679.pdf)\n\n\n### Structural Knowledge\n\n1. Paraphrasing Complex Network:Network Compression via Factor Transfer. Kim, Jangho et al. NeurIPS 2018\n2. Relational Knowledge Distillation.  Park, Wonpyo et al. CVPR 2019\n3. Knowledge Distillation via Instance Relationship Graph. Liu, Yufan et al. CVPR 2019\n4. 
Contrastive Representation Distillation. Tian, Yonglong et al. ICLR 2020\n5. Teaching To Teach By Structured Dark Knowledge. ICLR 2020\n6. Inter-Region Affinity Distillation for Road Marking Segmentation. Hou, Yuenan et al. CVPR 2020 [[code]][6.6]\n7. Heterogeneous Knowledge Distillation using Information Flow Modeling. Passalis et al. CVPR 2020 [[code]][6.7]\n8. Asymmetric metric learning for knowledge transfer. Budnik, Mateusz & Avrithis, Yannis. arXiv:2006.16331\n9. Local Correlation Consistency for Knowledge Distillation. ECCV 2020\n10. Few-Shot Class-Incremental Learning. Tao, Xiaoyu et al. CVPR 2020\n11. Semantic Relation Preserving Knowledge Distillation for Image-to-Image Translation. ECCV 2020\n12. Interpretable Foreground Object Search As Knowledge Distillation. ECCV 2020\n13. Improving Knowledge Distillation via Category Structure. ECCV 2020\n14. Few-Shot Class-Incremental Learning via Relation Knowledge Distillation. Dong, Songlin et al. AAAI 2021\n15. Complementary Relation Contrastive Distillation. Zhu, Jinguo et al. CVPR 2021\n16. Information Theoretic Representation Distillation. Miles et al. BMVC 2022 [[code]][19.10]\n\n### Privileged Information\n\n1. Learning using privileged information: similarity control and knowledge transfer. Vapnik, Vladimir and Rauf, Izmailov. MLR 2015  \n2. Unifying distillation and privileged information. Lopez-Paz, David et al. ICLR 2016\n3. Model compression via distillation and quantization. Polino, Antonio et al. ICLR 2018\n4. KDGAN:Knowledge Distillation with Generative Adversarial Networks. Wang, Xiaojie. NeurIPS 2018\n5. Efficient Video Classification Using Fewer Frames. Bhardwaj, Shweta et al. CVPR 2019\n6. Retaining privileged information for multi-task learning. Tang, Fengyi et al. KDD 2019\n7. A Generalized Meta-loss function for regression and classification using privileged information. Asif, Amina et al. arXiv:1811.06885\n8. Private Knowledge Transfer via Model Distillation with Generative Adversarial Networks. Gao, Di & Zhuo, Cheng. AAAI 2020\n9. Privileged Knowledge Distillation for Online Action Detection. Zhao, Peisen et al. cvpr 2021\n10. Adversarial Distillation for Learning with Privileged Provisions. Wang, Xiaojie et al. TPAMI 2019\n\n## KD + GAN\n\n1. Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks. Xu, Zheng et al. arXiv:1709.00513\n2. KTAN: Knowledge Transfer Adversarial Network. Liu, Peiye et al. arXiv:1810.08126\n3. KDGAN:Knowledge Distillation with Generative Adversarial Networks. Wang, Xiaojie. NeurIPS 2018\n4. Adversarial Learning of Portable Student Networks. Wang, Yunhe et al. AAAI 2018\n5. Adversarial Network Compression. Belagiannis et al. ECCV 2018\n6. Cross-Modality Distillation: A case for Conditional Generative Adversarial Networks. ICASSP 2018\n7. Adversarial Distillation for Efficient Recommendation with External Knowledge. TOIS 2018\n8. Training student networks for acceleration with conditional adversarial networks. Xu, Zheng et al. BMVC 2018\n9. DAFL:Data-Free Learning of Student Networks. Chen, Hanting et al. ICCV 2019\n10. MEAL: Multi-Model Ensemble via Adversarial Learning. Shen, Zhiqiang et al. AAAI 2019\n11. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary. Heo, Byeongho et al. AAAI 2019\n12. Exploiting the Ground-Truth: An Adversarial Imitation Based Knowledge Distillation Approach for Event Detection. Liu, Jian et al. AAAI 2019\n13. Adversarially Robust Distillation. Goldblum, Micah et al. 
AAAI 2020\n14. GAN-Knowledge Distillation for one-stage Object Detection. Hong, Wei et al. arXiv:1906.08467\n15. Lifelong GAN: Continual Learning for Conditional Image Generation. Kundu et al. arXiv:1908.03884\n16. Compressing GANs using Knowledge Distillation. Aguinaldo, Angeline et al. arXiv:1902.00159\n17. Feature-map-level Online Adversarial Knowledge Distillation. ICML 2020\n18. MineGAN: effective knowledge transfer from GANs to target domains with few images. Wang, Yaxing et al. CVPR 2020\n19. Distilling portable Generative Adversarial Networks for Image Translation. Chen, Hanting et al. AAAI 2020\n20. GAN Compression: Efficient Architectures for Interactive Conditional GANs. Junyan Zhu et al. CVPR 2020 [[code]][8.20]\n21. Adversarial network compression. Belagiannis et al. ECCV 2018\n22. P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection. Zhang, Zhiwei et al. IJCAI 2020\n23. StyleGAN2 Distillation for Feed-forward Image Manipulation. Viazovetskyi et al. ECCV 2020 [[code]][8.23]\n24. HardGAN: A Haze-Aware Representation Distillation GAN for Single Image Dehazing. ECCV 2020\n25. TinyGAN: Distilling BigGAN for Conditional Image Generation. ACCV 2020 [[code]][8.25]\n26. Learning Efficient GANs via Differentiable Masks and co-Attention Distillation. Li, Shaojie et al. arXiv:2011.08382 [[code]][8.26]\n27. Self-Supervised GAN Compression. Yu, Chong & Pool, Jeff. arXiv:2007.01491\n28. Teachers Do More Than Teach: Compressing Image-to-Image Models. CVPR 2021 [[code]][8.29]\n29. Distilling and Transferring Knowledge via cGAN-generated Samples for Image Classification and Regression. Ding, Xin et al. arXiv:2104.03164\n30. Content-Aware GAN Compression. Liu, Yuchen et al. CVPR 2021\n\n## KD + Meta-learning\n\n1. Few Sample Knowledge Distillation for Efficient Network Compression. Li, Tianhong et al. CVPR 2020\n2. Learning What and Where to Transfer. Jang, Yunhun et al, ICML 2019\n3. Transferring Knowledge across Learning Processes. Moreno, Pablo G et al. ICLR 2019\n4. Semantic-Aware Knowledge Preservation for Zero-Shot Sketch-Based Image Retrieval. Liu, Qing et al. ICCV 2019\n5. Diversity with Cooperation: Ensemble Methods for Few-Shot Classification. Dvornik, Nikita et al. ICCV 2019\n6. Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation. arXiv:1911.05329v1\n7. Progressive Knowledge Distillation For Generative Modeling. ICLR 2020\n8. Few Shot Network Compression via Cross Distillation. AAAI 2020\n9. MetaDistiller: Network Self-boosting via Meta-learned Top-down Distillation. Liu, Benlin et al. ECCV 2020\n10. Few-Shot Learning with Intra-Class Knowledge Transfer. arXiv:2008.09892\n11. Few-Shot Object Detection via Knowledge Transfer. Kim, Geonuk et al. arXiv:2008.12496\n12. Distilled One-Shot Federated Learning. arXiv:2009.07999\n13. Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains. Pan, Haojie et al. arXiv:2012.01266\n14. Progressive Network Grafting for Few-Shot Knowledge Distillation. Shen, Chengchao et al. AAAI 2021\n\n## Data-free KD\n\n1. Data-Free Knowledge Distillation for Deep Neural Networks. NeurIPS 2017\n2. Zero-Shot Knowledge Distillation in Deep Networks. ICML 2019\n3. DAFL:Data-Free Learning of Student Networks. ICCV 2019\n4. Zero-shot Knowledge Transfer via Adversarial Belief Matching. Micaelli, Paul and Storkey, Amos. NeurIPS 2019\n5. Dream Distillation: A Data-Independent Model Compression Framework. Kartikeya et al. ICML 2019\n6. 
Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion. Yin, Hongxu et al. CVPR 2020 [[code]][10.6]\n7. Data-Free Adversarial Distillation. Fang, Gongfan et al. CVPR 2020\n8. The Knowledge Within: Methods for Data-Free Model Compression. Haroush, Matan et al. CVPR 2020\n9. Knowledge Extraction with No Observable Data. Yoo, Jaemin et al. NeurIPS 2019 [[code]][10.9]\n10. Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN. CVPR 2020\n11. DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier. Addepalli, Sravanti et al. arXiv:1912.11960\n12. Generative Low-bitwidth Data Free Quantization. Xu, Shoukai et al. ECCV 2020 [[code]][10.12]\n13. This dataset does not exist: training models from generated images. arXiv:1911.02888\n14. MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. Sanjay et al. arXiv:2005.03161\n15. Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data. Such et al. ECCV 2020\n16. Billion-scale semi-supervised learning for image classification. FAIR. arXiv:1905.00546 [[code]][10.16]\n17. Data-Free Network Quantization with Adversarial Knowledge Distillation. Choi, Yoojin et al. CVPRW 2020\n18. Adversarial Self-Supervised Data-Free Distillation for Text Classification. EMNLP 2020\n19. Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer. arXiv:2010.07334\n20. Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN. Bhogale et al. arXiv:2011.00809\n21. Layer-Wise Data-Free CNN Compression. Horton, Maxwell et al (Apple Inc.). cvpr 2021\n22. Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge Distillation. Nayak et al. WACV 2021\n23. Learning in School: Multi-teacher Knowledge Inversion for Data-Free Quantization. Li, Yuhang et al. cvpr 2021\n24. Large-Scale Generative Data-Free Distillation. Luo, Liangchen et al. cvpr 2021\n25. Domain Impression: A Source Data Free Domain Adaptation Method. Kurmi et al. WACV 2021\n26. Learning Student Networks in the Wild. (HUAWEI-Noah). CVPR 2021\n27. Data-Free Knowledge Distillation For Image Super-Resolution. (HUAWEI-Noah). CVPR 2021\n28. Zero-shot Adversarial Quantization. Liu, Yuang et al. CVPR 2021 [[code]][10.28]\n29. Source-Free Domain Adaptation for Semantic Segmentation. Liu, Yuang et al. CVPR 2021\n30. Data-Free Model Extraction. Jean-Baptiste et al. CVPR 2021 [[code]][10.30]\n31. Delving into Data: Effectively Substitute Training for Black-box Attack. CVPR 2021\n32. Zero-Shot Knowledge Distillation Using Label-Free Adversarial Perturbation With Taylor Approximation. Li, Kang et al. IEEE Access, 2021. \n33. Half-Real Half-Fake Distillation for Class-Incremental Semantic Segmentation. Huang, Zilong et al. arXiv:2104.00875\n34. Dual Discriminator Adversarial Distillation for Data-free Model Compression. Zhao, Haoran et al. TCSVT 2021\n35. See through Gradients: Image Batch Recovery via GradInversion. Yin, Hongxu et al. CVPR 2021\n36. Contrastive Model Inversion for Data-Free Knowledge Distillation. Fang, Gongfan et al. IJCAI 2021 [[code]][10.36]\n37. Graph-Free Knowledge Distillation for Graph Neural Networks. Deng, Xiang & Zhang, Zhongfei. arXiv:2105.07519\n38. Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Mode. Wang Zi. ICML 2021\n39. Data-Free Knowledge Distillation for Heterogeneous Federated Learning. Zhu, Zhuangdi et al. 
ICML 2021\n\n\nother data-free model compression:\n\n- Data-free Parameter Pruning for Deep Neural Networks. Srinivas, Suraj et al. arXiv:1507.06149\n- Data-Free Quantization Through Weight Equalization and Bias Correction. Nagel, Markus et al. ICCV 2019\n- DAC: Data-free Automatic Acceleration of Convolutional Networks. Li, Xin et al. WACV 2019\n- A Privacy-Preserving DNN Pruning and Mobile Acceleration Framework. Zhan, Zheng et al. arXiv:2003.06513\n- ZeroQ: A Novel Zero Shot Quantization Framework. Cai et al. CVPR 2020 [[code]][10.35]\n- Diversifying Sample Generation for Data-Free Quantization. Zhang, Xiangguo et al. CVPR 2021\n\n## KD + AutoML\n\n1. Improving Neural Architecture Search Image Classifiers via Ensemble Learning. Macko, Vladimir et al. arXiv:1903.06236\n2. Blockwisely Supervised Neural Architecture Search with Knowledge Distillation. Li, Changlin et al. CVPR 2020\n3. Towards Oracle Knowledge Distillation with Neural Architecture Search. Kang, Minsoo et al. AAAI 2020\n4. Search for Better Students to Learn Distilled Knowledge. Gu, Jindong & Tresp, Volker arXiv:2001.11612\n5. Circumventing Outliers of AutoAugment with Knowledge Distillation. Wei, Longhui et al. arXiv:2003.11342\n6. Network Pruning via Transformable Architecture Search. Dong, Xuanyi & Yang, Yi. NeurIPS 2019\n7. Search to Distill: Pearls are Everywhere but not the Eyes. Liu Yu et al. CVPR 2020\n8. AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks. Fu, Yonggan et al. ICML 2020 [[code]][11.8]\n9. Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation. CVPR 2021\n\n## KD + RL\n\n1. N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning. Ashok, Anubhav et al. ICLR 2018\n2. Knowledge Flow:Improve Upon Your Teachers. Liu, Iou-jen et al. ICLR 2019\n3. Transferring Knowledge across Learning Processes. Moreno, Pablo G et al. ICLR 2019\n4. Exploration by random network distillation. Burda, Yuri et al. ICLR 2019\n5. Periodic Intra-Ensemble Knowledge Distillation for Reinforcement Learning. Hong, Zhang-Wei et al. arXiv:2002.00149\n6. Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach. Xue, Zeyue et al. arXiv:2002.02202\n7. Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning. Cha, han et al. arXiv:2005.06105\n8. Dual Policy Distillation. Lai, Kwei-Herng et al. IJCAI 2020\n9. Student-Teacher Curriculum Learning via Reinforcement Learning: Predicting Hospital Inpatient Admission Location. El-Bouri, Rasheed et al. ICML 2020\n10. Reinforced Multi-Teacher Selection for Knowledge Distillation. Yuan, Fei et al. AAAI 2021\n11. Universal Trading for Order Execution with Oracle Policy Distillation. Fang, Yuchen et al. AAAI 2021\n12. Weakly-Supervised Domain Adaptation of Deep Regression Trackers via Reinforced Knowledge Distillation. Dunnhofer et al. IEEE RAL\n\n## KD + Self-supervised\n\n1. Reversing the cycle: self-supervised deep stereo through enhanced monocular distillation. ECCV 2020\n2. Self-supervised Label Augmentation via Input Transformations. Lee, Hankook et al. ICML 2020 [[code]][12.2]\n3. Improving Object Detection with Selective Self-supervised Self-training. Li, Yandong et al. ECCV 2020\n4. Distilling Visual Priors from Self-Supervised Learning. Zhao, Bingchen & Wen, Xin. ECCVW 2020\n5. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. Grill et al. arXiv:2006.07733 [[code]][12.5]\n6. Unpaired Learning of Deep Image Denoising. 
Wu, Xiaohe et al. arXiv:2008.13711 [[code]][12.6]\n7. SSKD: Self-Supervised Knowledge Distillation for Cross Domain Adaptive Person Re-Identification. Yin, Junhui et al. arXiv:2009.05972\n8. Introspective Learning by Distilling Knowledge from Online Self-explanation. Gu, Jindong et al. ACCV 2020\n9. Robust Pre-Training by Adversarial Contrastive Learning. Jiang, Ziyu et al. NeurIPS 2020 [[code]][12.9]\n10. CompRess: Self-Supervised Learning by Compressing Representations. Koohpayegani et al. NeurIPS 2020 [[code]][12.10]\n11. Big Self-Supervised Models are Strong Semi-Supervised Learners. Che, Ting et al. NeurIPS 2020 [[code]][12.11]\n12. Rethinking Pre-training and Self-training. Zoph, Barret et al. NeurIPS 2020 [[code]][12.12]\n13. ISD: Self-Supervised Learning by Iterative Similarity Distillation. Tejankar et al. cvpr 2021 [[code]][12.13]\n14. Momentum^2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning. Li, Zeming et al. arXiv:2101.07525\n15. Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones. Cui, Cheng et al. arXiv:2103.05959\n16. Distilling Audio-Visual Knowledge by Compositional Contrastive Learning. Chen, Yanbei et al. CVPR 2021\n17. DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning. Gao, Yuting et al. arXiv:2104.09124\n18. Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation. Xiang, Jinxi et al. arXiv:2105.12924\n19. Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision. Chen, Xiaokang et al. CPVR 2021\n20. Adversarial Knowledge Transfer from Unlabeled Data. Gupta et al. ACM-MM 2020 [code](https:\u002F\u002Fgithub.com\u002Fagupt013\u002Fakt)\n\n## Multi-teacher and Ensemble KD \n\n1. Learning from Multiple Teacher Networks. You, Shan et al. KDD 2017\n2. Learning with single-teacher multi-student. You, Shan et al. AAAI 2018\n3. Knowledge distillation by on-the-fly native ensemble. Lan, Xu et al. NeurIPS 2018\n4. Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data. ICLR 2017\n5. Knowledge Adaptation: Teaching to Adapt. Arxiv:1702.02052\n6. Deep Model Compression: Distilling Knowledge from Noisy Teachers.  Sau, Bharat Bhusan et al. arXiv:1610.09650\n7. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Tarvainen, Antti and Valpola, Harri. NeurIPS 2017\n8. Born-Again Neural Networks. Furlanello, Tommaso et al. ICML 2018\n9. Deep Mutual Learning. Zhang, Ying et al. CVPR 2018\n10. Collaborative learning for deep neural networks. Song, Guocong and Chai, Wei. NeurIPS 2018\n11. Data Distillation: Towards Omni-Supervised Learning. Radosavovic, Ilija et al. CVPR 2018\n12. Multilingual Neural Machine Translation with Knowledge Distillation. ICLR 2019\n13. Unifying Heterogeneous Classifiers with Distillation. Vongkulbhisal et al. CVPR 2019\n14. Distilled Person Re-Identification: Towards a More Scalable System. Wu, Ancong et al. CVPR 2019\n15. Diversity with Cooperation: Ensemble Methods for Few-Shot Classification. Dvornik, Nikita et al. ICCV 2019\n16. Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System. Yang, Ze et al. WSDM 2020 \n17. FEED: Feature-level Ensemble for Knowledge Distillation. Park, SeongUk and Kwak, Nojun. AAAI 2020\n18. Stochasticity and Skip Connection Improve Knowledge Transfer. Lee, Kwangjin et al. ICLR 2020\n19. 
Online Knowledge Distillation with Diverse Peers. Chen, Defang et al. AAAI 2020\n20. Hydra: Preserving Ensemble Diversity for Model Distillation. Tran, Linh et al. arXiv:2001.04694\n21. Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost. Ruiz, Adria et al. arXv:2003.01474\n22. Distilling Knowledge from Ensembles of Acoustic Models for Joint CTC-Attention End-to-End Speech Recognition. Gao, Yan et al. arXiv:2005.09310\n23. Large-Scale Few-Shot Learning via Multi-Modal Knowledge Discovery. ECCV 2020\n24. Collaborative Learning for Faster StyleGAN Embedding. Guan, Shanyan et al. arXiv:2007.01758\n25. Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection. Chen, Cong et al. IEEE 2020 [[code]][12.25]\n26. Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation. MICCAI 2020\n27. Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation. Nguyen-Meidine et al. WACV 2020\n28. Semi-supervised Learning with Teacher-student Network for Generalized Attribute Prediction. Shin, Minchul et al. ECCV 2020\n29. Knowledge Distillation for Multi-task Learning. Li, WeiHong & Bilen, Hakan. arXiv:2007.06889 [[project]][12.29]\n30. Adaptive Multi-Teacher Multi-level Knowledge Distillation. Liu, Yuang et al. Neurocomputing 2020 [[code]][12.30]\n31. Online Ensemble Model Compression using Knowledge Distillation. ECCV 2020\n32. Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification. ECCV 2020\n33. Group Knowledge Transfer: Collaborative Training of Large CNNs on the Edge. He, Chaoyang et al. arXiv:2007.14513\n34. Densely Guided Knowledge Distillation using Multiple Teacher Assistants. Son, Wonchul et l. arXiv:2009.08825\n35. ProxylessKD: Direct Knowledge Distillation with Inherited Classifier for Face Recognition. Shi, Weidong et al. arXiv:2011.00265\n36. Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space. Du, Shangchen et al. NeurIPS 2020 [[code]][12.37]\n37. Reinforced Multi‐Teacher Selection for Knowledge Distillation. Yuan, Fei et al. AAAI 2021\n38. Class-­Incremental Instance Segmentation via Multi­‐Teacher Networks. Gu, Yanan et al. AAAI 2021\n39. Collaborative Teacher-Student Learning via Multiple Knowledge Transfer. Sun, Liyuan et al. arXiv:2101.08471\n40. Efficient Conditional GAN Transfer with Knowledge Propagation across Classes. Shahbaziet al. CVPR 2021 [[code]][8.28]\n41. Knowledge Evolution in Neural Networks. Taha, Ahmed et al. CVPR 2021 [[code]][12.41]\n42. Distilling a Powerful Student Model via Online Knowledge Distillation. Li, Shaojie et al. arXiv:2103.14473\n\n### Knowledge Amalgamation（KA) - zju-VIPA\n\n[VIPA - KA][13.24]\n\n1. Amalgamating Knowledge towards Comprehensive Classification. Shen, Chengchao et al. AAAI 2019\n2. Amalgamating Filtered Knowledge : Learning Task-customized Student from Multi-task Teachers. Ye, Jingwen et al. IJCAI 2019\n3. Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning. Luo, Sihui et al. IJCAI 2019\n4. Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More. Ye, Jingwen et al. CVPR 2019\n5. Customizing Student Networks From Heterogeneous Teachers via Adaptive Knowledge Amalgamation. ICCV 2019\n6. Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN. CVPR 2020\n\n## Cross-modal \u002F DA \u002F Incremental Learning\n\n1. SoundNet: Learning Sound Representations from Unlabeled Video SoundNet Architecture. 
Aytar, Yusuf et al. NeurIPS 2016\n2. Cross Modal Distillation for Supervision Transfer. Gupta, Saurabh et al. CVPR 2016\n3. Emotion recognition in speech using cross-modal transfer in the wild. Albanie, Samuel et al. ACM MM 2018\n4. Through-Wall Human Pose Estimation Using Radio Signals. Zhao, Mingmin et al. CVPR 2018\n5. Compact Trilinear Interaction for Visual Question Answering. Do, Tuong et al. ICCV 2019\n6. Cross-Modal Knowledge Distillation for Action Recognition. Thoker, Fida Mohammad and Gall, Juerge. ICIP 2019\n7. Learning to Map Nearly Anything. Salem, Tawfiq et al. arXiv:1909.06928\n8. Semantic-Aware Knowledge Preservation for Zero-Shot Sketch-Based Image Retrieval. Liu, Qing et al. ICCV 2019\n9. UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation. Kundu et al. ICCV 2019\n10. CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency. Chen, Yun-Chun et al. CVPR 2019\n11. XD:Cross lingual Knowledge Distillation for Polyglot Sentence Embeddings. ICLR 2020\n12. Effective Domain Knowledge Transfer with Soft Fine-tuning. Zhao, Zhichen et al. arXiv:1909.02236\n13. ASR is all you need: cross-modal distillation for lip reading. Afouras et al. arXiv:1911.12747v1\n14. Knowledge distillation for semi-supervised domain adaptation. arXiv:1908.07355\n15. Domain Adaptation via Teacher-Student Learning for End-to-End Speech Recognition. Meng, Zhong et al. arXiv:2001.01798\n16. Cluster Alignment with a Teacher for Unsupervised Domain Adaptation. ICCV 2019\n17. Attention Bridging Network for Knowledge Transfer. Li, Kunpeng et al. ICCV 2019\n18. Unpaired Multi-modal Segmentation via Knowledge Distillation. Dou, Qi et al. arXiv:2001.03111\n19. Multi-source Distilling Domain Adaptation. Zhao, Sicheng et al. arXiv:1911.11554\n20. Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing. Hu, Hengtong et al. CVPR 2020\n21. Improving Semantic Segmentation via Self-Training. Zhu, Yi et al. arXiv:2004.14960\n22. Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation. arXiv:2005.08213\n23. Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation. arXiv:2005.07839\n24. Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge. Zhao, Long et al. CVPR 2020\n25. Large-Scale Domain Adaptation via Teacher-Student Learning. Li, Jinyu et al. arXiv:1708.05466\n26. Large Scale Audiovisual Learning of Sounds with Weakly Labeled Data. Fayek, Haytham M. & Kumar, Anurag. IJCAI 2020\n27. Distilling Cross-Task Knowledge via Relationship Matching. Ye, Han-Jia. et al. CVPR 2020 [[code]][14.27]\n28. Modality distillation with multiple stream networks for action recognition. Garcia, Nuno C. et al. ECCV 2018\n29. Domain Adaptation through Task Distillation. Zhou, Brady et al. ECCV 2020 [[code]][14.29]\n30. Dual Super-Resolution Learning for Semantic Segmentation. Wang, Li et al. CVPR 2020 [[code]][14.30]\n31. Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation. Jing, Taotao et al. ACM MM 2020\n32. Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation. Peng, Xingchao et al. ECCV 2020 [[code]][14.32]\n33. Unsupervised Domain Adaptive Knowledge Distillation for Semantic Segmentation. Kothandaraman et al. arXiv:2011.08007\n34. A Student‐Teacher Architecture for Dialog Domain Adaptation under the Meta‐Learning Setting. Qian, Kun et al. AAAI 2021\n35. Multimodal Fusion via Teacher‐Student Network for Indoor Action Recognition. Bruce et al. 
AAAI 2021\n36. Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation. Li, Kang et al. TMI 2021\n37. Knowledge Distillation Methods for Efficient Unsupervised Adaptation Across Multiple Domains. Nguyen et al. IVC 2021\n38. Feature-Supervised Action Modality Transfer. Thoker, Fida Mohammad and Snoek, Cees. ICPR 2020.\n39. There is More than Meets the Eye: Self-Supervised Multi-Object Detection and Tracking with Sound by Distilling Multimodal Knowledge. Francisco et al. CVPR 2021\n40. Adaptive Consistency Regularization for Semi-Supervised Transfer Learning\nAbulikemu. Abulikemu et al. CVPR 2021 [[code]][14.40]\n41. Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning. Cheraghian et al. CVPR 2021\n42. Distilling Causal Effect of Data in Class-Incremental Learning. Hu, Xinting et al. CVPR 2021 [[code]][14.42]\n43. Semi-supervised Domain Adaptation based on Dual-level Domain Mixing for Semantic Segmentation. Chen, Shuaijun et al. CVPR 2021\n44. PLOP: Learning without Forgetting for Continual Semantic Segmentation. Arthur et al. CVPR 2021\n45. Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations. Umberto & Pietro. CVPR 2021\n46. Learning Scene Structure Guidance via Cross-Task Knowledge Transfer for Single Depth Super-Resolution. Sun, Baoli et al. CVPR 2021 [[code]][14.46]\n47. CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning. Wei, Chen et al. CVPR 2021\n48. Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in Scene Segmentation. Zheng, Zhedong & Yang, Yi. CVPR 2021\n49. Image Classification in the Dark Using Quanta Image Sensors. Gnanasambandam, Abhiram & Chan, Stanley H. ECCV 2020\n50. Dynamic Low-Light Imaging with Quanta Image Sensors. Chi, Yiheng et al. ECCV 2020\n51. Visualizing Adapted Knowledge in Domain Transfer. Hou, Yunzhong & Zheng, Liang. CVPR 2021\n52. Neutral Cross-Entropy Loss Based Unsupervised Domain Adaptation for Semantic Segmentation. Xu, Hanqing et al. IEEE TIP 2021\n53. Zero-Shot Detection via Vision and Language Knowledge Distillation. Gu, Xiuye et al. arXiv:2104.13921\n54. Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaptation. Chao, Chen-Hao et al. CVPRW 2021\n55. Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer. Wu, Zhiyuan et al. arXiv: 2104.14696\n56. A Fourier-based Framework for Domain Generalization. Xu, Qinwei et al. CVPR 2021\n57. KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation. Feng, Haozhe et al. ICML 2021\n\n\n## Application of KD\n\n1. Face model compression by distilling knowledge from neurons. Luo, Ping et al. AAAI 2016\n2. Learning efficient object detection models with knowledge distillation. Chen, Guobin et al. NeurIPS 2017\n3. Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy. Mishra, Asit et al. NeurIPS 2018\n4. Distilled Person _Re-identification_: Towars a More Scalable System. Wu, Ancong et al. CVPR 2019\n5. Efficient _Video Classification_ Using Fewer Frames. Bhardwaj, Shweta et al. CVPR 2019\n6. Fast Human _Pose Estimation_. Zhang, Feng et al. CVPR 2019\n7. Distilling knowledge from a deep _pose_ regressor network. Saputra et al. arXiv:1908.00858 (2019)\n8. Learning Lightweight _Lane Detection_ CNNs by Self Attention Distillation. Hou, Yuenan et al. ICCV 2019\n9. 
Structured Knowledge Distillation for _Semantic Segmentation_. Liu, Yifan et al. CVPR 2019\n10. Relation Distillation Networks for _Video Object Detection_. Deng, Jiajun et al. ICCV 2019\n11. Teacher Supervises Students How to Learn From Partially Labeled Images for _Facial Landmark Detection_. Dong, Xuanyi and Yang, Yi. ICCV 2019\n12. Progressive Teacher-student Learning for Early _Action Prediction_. Wang, Xionghui et al. CVPR 2019\n13. Lightweight Image _Super-Resolution_ with Information Multi-distillation Network. Hui, Zheng et al. ICCVW 2019\n14. AWSD:Adaptive Weighted Spatiotemporal Distillation for _Video Representation_. Tavakolian, Mohammad et al. ICCV 2019\n15. Dynamic Kernel Distillation for Efficient _Pose Estimation_ in Videos. Nie, Xuecheng et al. ICCV 2019\n16. Teacher Guided _Architecture Search_. Bashivan, Pouya and Tensen, Mark. ICCV 2019\n17. Online Model Distillation for Efficient _Video Inference_. Mullapudi et al. ICCV 2019\n18. Distilling _Object Detectors_ with Fine-grained Feature Imitation. Wang, Tao et al. CVPR 2019\n19. Relation Distillation Networks for _Video Object Detection_. Deng, Jiajun et al. ICCV 2019\n20. Knowledge Distillation for Incremental Learning in _Semantic Segmentation_. arXiv:1911.03462\n21. MOD: A Deep Mixture Model with Online Knowledge Distillation for Large Scale Video Temporal Concept Localization. arXiv:1910.12295\n22. Teacher-Students Knowledge Distillation for _Siamese Trackers_. arXiv:1907.10586\n23. LaTeS: Latent Space Distillation for Teacher-Student _Driving_ Policy Learning. Zhao, Albert et al. CVPR 2020(pre)\n24. Knowledge Distillation for _Brain Tumor Segmentation_. arXiv:2002.03688\n25. ROAD: Reality Oriented Adaptation for _Semantic Segmentation_ of Urban Scenes. Chen, Yuhua et al. CVPR 2018\n26. Multi-Representation Knowledge Distillation For Audio Classification. Gao, Liang et al. arXiv:2002.09607\n27. Collaborative Distillation for Ultra-Resolution Universal _Style Transfer_. Wang, Huan et al. CVPR 2020 [[code]][15.28]\n28. ShadowTutor: Distributed Partial Distillation for Mobile _Video_ DNN Inference. Chung, Jae-Won et al. ICPP 2020 [[code]][15.29]\n29. Object Relational Graph with Teacher-Recommended Learning for _Video Captioning_. Zhang, Ziqi et al. CVPR 2020\n30. Spatio-Temporal Graph for _Video Captioning_ with Knowledge distillation. CVPR 2020 [[code]][15.31]\n31. Squeezed Deep _6DoF Object Detection_ Using Knowledge Distillation. Felix, Heitor et al. arXiv:2003.13586\n32. Distilled Semantics for Comprehensive _Scene Understanding_ from Videos. Tosi, Fabio et al. arXiv:2003.14030\n33. Parallel WaveNet: Fast high-fidelity _speech synthesis_. Van et al. ICML 2018\n34. Distill Knowledge From NRSfM for Weakly Supervised _3D Pose_ Learning. Wang Chaoyang et al. ICCV 2019\n35. KD-MRI: A knowledge distillation framework for _image reconstruction_ and image restoration in MRI workflow. Murugesan et al. MIDL 2020\n36. Geometry-Aware Distillation for Indoor _Semantic Segmentation_. Jiao, Jianbo et al. CVPR 2019\n37. Teacher Supervises Students How to Learn From Partially Labeled Images for _Facial Landmark Detection_. ICCV 2019\n38. Distill Image _Dehazing_ with Heterogeneous Task Imitation. Hong, Ming et al. CVPR 2020\n39. Knowledge Distillation for _Action Anticipation_ via Label Smoothing. Camporese et al. arXiv:2004.07711\n40. More Grounded _Image Captioning_ by Distilling Image-Text Matching Model. Zhou, Yuanen et al. CVPR 2020\n41. Distilling Knowledge from Refinement in Multiple _Instance Detection_ Networks. 
Zeni, Luis Felipe & Jung, Claudio. arXiv:2004.10943\n42. Enabling Incremental Knowledge Transfer for _Object Detection_ at the Edge. arXiv:2004.05746\n43. Uninformed Students: Student-Teacher _Anomaly Detection_ with Discriminative Latent Embeddings. Bergmann, Paul et al. CVPR 2020\n44. TA-Student _VQA_: Multi-Agents Training by Self-Questioning. Xiong, Peixi & Wu Ying. CVPR 2020\n45. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. Jiang, Lu et al. ICML 2018\n46. A Multi-Task Mean Teacher for Semi-Supervised _Shadow Detection_. Chen, Zhihao et al. CVPR 2020 [[code]][15.48]\n47. Learning Lightweight _Face Detector_ with Knowledge Distillation. Zhang Shifeng et al. IEEE 2019\n48. Learning Lightweight _Pedestrian Detector_ with Hierarchical Knowledge Distillation. ICIP 2019\n49. Distilling _Object Detectors_ with Task Adaptive Regularization. Sun, Ruoyu et al. arXiv:2006.13108\n50. Intra-class Compactness Distillation for _Semantic Segmentation_. ECCV 2020\n51. DOPE: Distillation Of Part Experts for whole-body _3D pose estimation_ in the wild. ECCV 2020\n52. Self-similarity Student for Partial Label Histopathology Image _Segmentation_. ECCV 2020\n53. Robust _Re-Identification_ by Multiple Views Knowledge Distillation. Porrello et al. ECCV 2020 [[code]][15.58]\n54. LabelEnc: A New Intermediate Supervision Method for _Object Detection_. Hao, Miao et al. arXiv:2007.03282\n55. Optical Flow Distillation: Towards Efficient and Stable _Video Style Transfer_. Chen, Xinghao et al. ECCV 2020\n56. Adversarial Self-Supervised Learning for Semi-Supervised _3D Action Recognition_. Si, Chenyang et al. ECCV 2020\n57. Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks. Zhang, Yonggang et al. ICML 2020\n58. RGB-IR Cross-modality Person _ReID_ based on Teacher-Student GAN Mode. Zhang, Ziyue et al. arXiv:2007.07452\n59. _Defocus Blur Detection_ via Depth Distillation. Cun, Xiaodong & Pun, Chi-Man. ECCV 2020 [[code]][15.64]\n60. Boosting Weakly Supervised _Object Detection_ with Progressive Knowledge Transfer. Zhong, Yuanyi et al. ECCV 2020 [[code]][15.64]\n61. Weight Decay Scheduling and Knowledge Distillation for _Active Learning_. ECCV 2020\n62. Circumventing Outliers of AutoAugment with Knowledge Distillation. ECCV 2020\n63. Improving _Face Recognition_ from Hard Samples via Distribution Distillation Loss. ECCV 2020\n64. Exclusivity-Consistency Regularized Knowledge Distillation for _Face Recognition_. ECCV 2020\n65. Self-similarity Student for Partial Label Histopathology Image _Segmentation_. Cheng, Hsien-Tzu et al. ECCV 2020\n66. Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell _Instance Segmentation_. Zhou, Yanning et al. arXiv:2007.10787 [[code]][15.70]\n67. Two-Level Residual Distillation based Triple Network for Incremental _Object Detection_. Yang, Dongbao et al. arXiv:2007.13428\n68. Towards Unsupervised _Crowd Counting_ via Regression-Detection Bi-knowledge Transfer. Liu, Yuting et al. ACM MM 2020\n69. Teacher-Critical Training Strategies for _Image Captioning_. Huang, Yiqing & Chen, Jiansheng. arXiv:2009.14405\n70. Object Relational Graph with Teacher-Recommended Learning for _Video Captioning_. Zhang, Ziqi et al. CVPR 2020\n71. Multi-Frame to Single-Frame: Knowledge Distillation for _3D Object Detection_. Wang Yue et al. ECCV 2020\n72. Residual Feature Distillation Network for Lightweight Image _Super-Resolution_. Liu, Jie et al. ECCV 2020\n73. 
Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging. Interspeech 2020\n74. Federated Model Distillation with Noise-Free Differential Privacy. arXiv:2009.05537\n75. _Long-tailed Recognition_ by Routing Diverse Distribution-Aware Experts. Wang, Xudong et al. arXiv:2010.01809\n76. Fast _Video Salient Object Detection_ via Spatiotemporal Knowledge Distillation. Yi, Tang & Yuan, Li. arXiv:2010.10027\n77. Multiresolution Knowledge Distillation for _Anomaly Detection_. Salehi et al. CVPR 2021\n78. Channel-wise Distillation for _Semantic Segmentation_. Shu, Changyong et al. arXiv:2011.13256\n79. Teach me to segment with mixed supervision: Confident students become masters. Dolz, Jose et al. arXiv:2012.08051\n80. Invariant Teacher and Equivariant Student for Unsupervised _3D Human Pose Estimation_. Xu, Chenxin et al. AAAI 2021 [[code]][15.80]\n81. Training data-efficient _image transformers_ & distillation through attention. Touvron, Hugo et al. arXiv:2012.12877 [[code]][15.81]\n82. SID: Incremental Learning for Anchor-Free _Object Detection_ via Selective and Inter-Related Distillation. Peng, Can et al. arXiv:2012.15439\n83. PSSM-Distil: Protein Secondary Structure Prediction (PSSP) on Low-Quality PSSM by Knowledge Distillation with Contrastive Learning. Wang, Qin et al. AAAI 2021\n84. Diverse Knowledge Distillation for End-to-End _Person Search_. Zhang, Xinyu et al. AAAI 2021\n85. Enhanced _Audio Tagging_ via Multi- to Single-Modal Teacher-Student Mutual Learning. Yin, Yifang et al. AAAI 2021\n86. Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. Li, Yige et al. ICLR 2021 [[code]][15.86]\n87. Unbiased Teacher for Semi-Supervised _Object Detection_. Liu, Yen-Cheng et al. ICLR 2021 [[code]][15.87]\n88. Localization Distillation for _Object Detection_. Zheng, Zhaohui et al. CVPR 2021 [[code]][15.88]\n89. Distilling Knowledge via Intermediate _Classifier_ Heads. Aryan & Amirali. arXiv:2103.00497\n90. Distilling _Object Detectors_ via Decoupled Features. (HUAWEI-Noah). CVPR 2021\n91. General Instance Distillation for _Object Detection_. Dai, Xing et al. CVPR 2021\n92. Multiresolution Knowledge Distillation for _Anomaly Detection_. Mohammadreza et al. CVPR 2021\n93. Student-Teacher Feature Pyramid Matching for Unsupervised _Anomaly Detection_. Wang, Guodong et al. arXiv:2103.04257\n94. Teacher-Explorer-Student Learning: A Novel Learning Method for _Open Set Recognition_. Jaeyeon Jang & Chang Ouk Kim. IEEE 2021\n95. Dense Relation Distillation with Context-aware Aggregation for Few-Shot _Object Detection_. Hu, Hanzhe et al. CVPR 2021 [[code]][15.95]\n96. Compressing _Visual-linguistic_ Model via Knowledge Distillation. Fang, Zhiyuan et al. arXiv:2104.02096\n97. Farewell to Mutual Information: Variational Distillation for Cross-Modal Person _Re-Identification_. Tian, Xudong et al. CVPR 2021\n98. Improving Weakly Supervised _Visual Grounding_ by Contrastive Knowledge Distillation. Wang, Liwei et al. CVPR 2021\n99. Orderly Dual-Teacher Knowledge Distillation for Lightweight _Human Pose Estimation_. Zhao, Zhongqiu et al. arXiv:2104.10414\n100. Boosting Light-Weight _Depth Estimation_ Via Knowledge Distillation. Hu, Junjie et al. arXiv:2105.06143\n101. Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching. Wu, Bofeng et al. arXiv:2105.08252\n102. Revisiting Knowledge Distillation for Object Detection. Banitalebi-Dehkordi, Amin. arXiv:2105.10633\n103. 
Towards Compact Single Image Super-Resolution via Contrastive Self-distillation. Yanbo, Wang et al. IJCAI 2021\n104. How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting. Monti, Alessio et al. CVPR 2022\n\n\n### for NLP & Data-Mining\n\n1. Patient Knowledge Distillation for BERT Model Compression. Sun, Siqi et al. arXiv:1908.09355\n2. TinyBERT: Distilling BERT for Natural Language Understanding. Jiao, Xiaoqi et al. arXiv:1909.10351\n3. Learning to Specialize with Knowledge Distillation for Visual Question Answering. NeurIPS 2018\n4. Knowledge Distillation for Bilingual Dictionary Induction. EMNLP 2017\n5. A Teacher-Student Framework for Maintainable Dialog Manager. EMNLP 2018\n6. Understanding Knowledge Distillation in Non-Autoregressive Machine Translation. arxiv 2019\n7. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. Sanh, Victor et al. arXiv:1910.01108\n8. Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. Turc, Iulia et al. arXiv:1908.08962\n9. On Knowledge distillation from complex networks for response prediction. Arora, Siddhartha et al. NAACL 2019\n10. Distilling the Knowledge of BERT for Text Generation. arXiv:1911.03829v1\n11. Understanding Knowledge Distillation in Non-autoregressive Machine Translation. arXiv:1911.02727\n12. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices. Sun, Zhiqing et al. ACL 2020\n13. Acquiring Knowledge from Pre-trained Model to Neural Machine Translation. Weng, Rongxiang et al. AAAI 2020\n14. TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval. Lu, Wenhao et al. KDD 2020\n15. Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation. Xu, Yige et al. arXiv:2002.10345\n16. FastBERT: a Self-distilling BERT with Adaptive Inference Time. Liu, Weijie et al. ACL 2020\n17. LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression. Mao, Yihuan et al. arXiv:2004.04124\n18. DynaBERT: Dynamic BERT with Adaptive Width and Depth. Hou, Lu et al. NeurIPS 2020\n19. Structure-Level Knowledge Distillation For Multilingual Sequence Labeling. Wang, Xinyu et al. ACL 2020\n20. Distilled embedding: non-linear embedding factorization using knowledge distillation. Lioutas, Vasileios et al. arXiv:1910.06720\n21. TinyMBERT: Multi-Stage Distillation Framework for Massive Multi-lingual NER. Mukherjee & Awadallah. ACL 2020\n22. Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation. Sun, Haipeng et al. arXiv:2004.10171\n23. Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation. Reimers, Nils & Gurevych, Iryna arXiv:2004.09813\n24. Distilling Knowledge for Fast Retrieval-based Chat-bots. Tahami et al. arXiv:2004.11045\n25. Single-\u002FMulti-Source Cross-Lingual NER via Teacher-Student Learning on Unlabeled Data in Target Language. ACL 2020\n26. Local Clustering with Mean Teacher for Semi-supervised Learning. arXiv:2004.09665\n27. Time Series Data Augmentation for Neural Networks by Time Warping with a Discriminative Teacher. arXiv:2004.08780 \n28. Syntactic Structure Distillation Pretraining For Bidirectional Encoders. arXiv: 2005.13482\n29. Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation. arXiv:2003.02877\n30. Distilling Neural Networks for Faster and Greener Dependency Parsing. arXiv:2006.00844\n31. Distilling Knowledge from Well-informed Soft Labels for Neural Relation Extraction. AAAI 2020 [[code]][16.32]\n32. 
More Grounded Image Captioning by Distilling Image-Text Matching Model. Zhou, Yuanen et al. CVPR 2020\n33. Multimodal Learning with Incomplete Modalities by Knowledge Distillation. Wang, Qi et al. KDD 2020\n34. Distilling the Knowledge of BERT for Sequence-to-Sequence ASR. Futami, Hayato et al. arXiv:2008.03822\n35. Contrastive Distillation on Intermediate Representations for Language Model Compression. Sun, Siqi et al. EMNLP 2020 [[code]][16.37]\n36. Noisy Self-Knowledge Distillation for Text Summarization. arXiv:2009.07032\n37. Simplified TinyBERT: Knowledge Distillation for Document Retrieval. arXiv:2009.07531\n38. Autoregressive Knowledge Distillation through Imitation Learning. arXiv:2009.07253\n39. BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover’s Distance. EMNLP 2020 [[code]][16.392]\n40. Interpretable Embedding Procedure Knowledge Transfer. Seunghyun Lee et al. AAAI 2021 [[code]][16.40]\n41. LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding. Fu, Hao et al. AAAI 2021\n42. Towards Zero-Shot Knowledge Distillation for Natural Language Processing. Ahmad et al. arXiv:2012.15495\n43. Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains. Pan, Haojie et al. AAAI 2021\n44. Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation. Feng, Lingyun et al. AAAI 2021\n45. Label Confusion Learning to Enhance Text Classification Models. Guo, Biyang et al. AAAI 2021\n46. NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application. Wu, Chuhan et al. KDD 2021\n\n### for RecSys\n\n1. Developing Multi-Task Recommendations with Long-Term Rewards via Policy Distilled Reinforcement Learning. Liu, Xi et al. arXiv:2001.09595\n2. A General Knowledge Distillation Framework for Counterfactual Recommendation via Uniform Data. Liu, Dugang et al. SIGIR 2020 [[Slides]][16.35] [[code]][16.352]\n3. LightRec: a Memory and Search-Efficient Recommender System. Lian, Defu et al. WWW 2020\n4. Privileged Features Distillation at Taobao Recommendations. Xu, Chen et al. KDD 2020\n5. Next Point-of-Interest Recommendation on Resource-Constrained Mobile Devices. WWW 2020\n6. Adversarial Distillation for Efficient Recommendation with External Knowledge. Chen, Xu et al. ACM Trans, 2018\n7. Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System. Tang, Jiaxi et al. SIGKDD 2018\n8. A novel Enhanced Collaborative Autoencoder with knowledge distillation for top-N recommender systems. Pan, Yiteng et al. Neurocomputing 2019 [[code]][16.38]\n9. ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation. Mi, Fei et al. ACM RecSys 2020\n10. Ensembled CTR Prediction via Knowledge Distillation. Zhu, Jieming et al. (Huawei) CIKM 2020\n11. DE-RRD: A Knowledge Distillation Framework for Recommender System. Kang, Seongku et al. CIKM 2020 [[code]][16.39]\n12. Neural Compatibility Modeling with Attentive Knowledge Distillation. Song, Xuemeng et al. SIGIR 2018\n13. Binarized Collaborative Filtering with Distilling Graph Convolutional Networks. Wang, Haoyu et al. IJCAI 2019\n14. Collaborative Distillation for Top-N Recommendation. Jae-woong Lee, et al. CIKM 2019\n15. Distilling Structured Knowledge into Embeddings for Explainable and Accurate Recommendation. Zhang Yuan et al. WSDM 2020\n16. UMEC: Unified Model and Embedding Compression for Efficient Recommendation Systems. ICLR 2021\n17. 
Bidirectional Distillation for Top-K Recommender System. WWW 2021\n18. Privileged Graph Distillation for Cold-start Recommendation. SIGIR 2021\n19. Topology Distillation for Recommender System [KDD 2021]\n20. Conditional Attention Networks for Distilling Knowledge Graphs in Recommendation [CIKM 2021]\n21. Explore, Filter and Distill: Distilled Reinforcement Learning in Recommendation [CIKM 2021] [[Video]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3459637.3481917)[[Code]](https:\u002F\u002Fgithub.com\u002Fmodriczhang\u002FDRL-Rec)\n22. Graph Structure Aware Contrastive Knowledge Distillation for Incremental Learning in Recommender Systems[CIKM 2021]\n23. Conditional Graph Attention Networks for Distilling and Refining Knowledge Graphs in Recommendation[CIKM 2021]\n24. Target Interest Distillation for Multi-Interest Recommendation [CIKM 2022] [[Video]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3511808.3557464) [[Code]](https:\u002F\u002Fgithub.com\u002FTHUwangcy\u002FReChorus\u002Ftree\u002FCIKM22)\n25. KDCRec: Knowledge Distillation for Counterfactual Recommendation Via Uniform Data [TKDE 2022] [[Code]](https:\u002F\u002Fgithub.com\u002Fdgliu\u002FTKDE_KDCRec)\n26. Revisiting Graph based Social Recommendation: A Distillation Enhanced Social Graph Network[WWW 2022] [[Code]](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002Fuqmsr67wqurpnre\u002FSupplementary%20Material.zip?dl=0)\n27. False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation [Arxiv 2110.06483]\n28. Dual Correction Strategy for Ranking Distillation in Top-N Recommender System[ArXiv 2109.03459v1]\n29. Scene-adaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search. Chen, Lei et al.[ArXiv 2107.07173v1]\n30. Interpolative Distillation for Unifying Biased and Debiased Recommendation [SIGIR 2022] [[Video]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3477495.3532002) [[Code]](https:\u002F\u002Fgithub.com\u002FDingseewhole\u002FInterD_master)\n31. FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation[Arxiv 2205.02359v1]\n32. On-Device Next-Item Recommendation with Self-Supervised Knowledge Distillation[SIGIR 2022] [[Code]](https:\u002F\u002Fgithub.com\u002Fxiaxin1998\u002FOD-Rec)\n33. Cross-Task Knowledge Distillation in Multi-Task Recommendation[AAAI 2022]\n34. Toward Understanding Privileged Features Distillation in Learning-to-Rank [NIPS 2022]\n35. Debias the Black-box: A Fair Ranking Framework via Knowledge Distillation [WISE 2022]\n36. Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings[SIGIR 2022] [[Code]](https:\u002F\u002Fgithub.com\u002Fstaoxiao\u002Flibvq)\n37. AutoFAS: Automatic Feature and Architecture Selection for Pre-Ranking System [KDD 2022]\n38. An Incremental Learning framework for Large-scale CTR Prediction[RecSys 22]\n39. Directed Acyclic Graph Factorization Machines for CTR Prediction via Knowledge Distillation [WSDM 2023] [[Code]](https:\u002F\u002Fgithub.com\u002Frucaibox\u002Fdagfm)\n40. Unbiased Knowledge Distillation for Recommendation [WSDM 2023] [[Code]](https:\u002F\u002Fgithub.com\u002Fchengang95\u002FUnKD)\n41. DistilledCTR: Accurate and scalable CTR prediction model through model distillation [ESWA 2022]\n43. Top-aware recommender distillation with deep reinforcement learning [Information Sciences 2021]\n\n## Model Pruning or Quantization\n\n1. 
Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-regression. ECCV 2016\n2. N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning. Ashok, Anubhav et al. ICLR 2018\n3. Slimmable Neural Networks. Yu, Jiahui et al. ICLR 2018\n4. Co-Evolutionary Compression for Unpaired Image Translation. Shu, Han et al. ICCV 2019\n5. MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning. Liu, Zechun et al. ICCV 2019\n6. LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning. ICLR 2020\n7. Pruning with hints: an efficient framework for model acceleration. ICLR 2020\n8. Training convolutional neural networks with cheap convolutions and online distillation. arXiv:1909.13063\n9. Cooperative Pruning in Cross-Domain Deep Neural Network Compression. [Chen, Shangyu][17.9] et al. IJCAI 2019\n10. QKD: Quantization-aware Knowledge Distillation. Kim, Jangho et al. arXiv:1911.12491v1\n11. Neural Network Pruning with Residual-Connections and Limited-Data. Luo, Jian-Hao & Wu, Jianxin. CVPR 2020\n12. Training Quantized Neural Networks with a Full-precision Auxiliary Module. Zhuang, Bohan et al. CVPR 2020\n13. Towards Effective Low-bitwidth Convolutional Neural Networks. Zhuang, Bohan et al. CVPR 2018\n14. Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations. Zhuang, Bohan et al. arXiv:1908.04680\n15. Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation. Le et al. arXiv:2006.11487 [[code]][17.15]\n16. Knowledge Distillation Beyond Model Compression. Choi, Arthur et al. arXiv:2007.01493\n17. Distillation Guided Residual Learning for Binary Convolutional Neural Networks. Ye, Jianming et al. ECCV 2020\n18. Cascaded channel pruning using hierarchical self-distillation. Miles & Mikolajczyk. BMVC 2020\n19. TernaryBERT: Distillation-aware Ultra-low Bit BERT. Zhang, Wei et al. EMNLP 2020\n20. Weight Distillation: Transferring the Knowledge in Neural Network Parameters. arXiv:2009.09152\n21. Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks. Boo, Yoonho et al. AAAI 2021\n22. Binary Graph Neural Networks. Bahri, Mehdi et al. CVPR 2021\n23. Self-Damaging Contrastive Learning. Jiang, Ziyu et al. ICML 2021\n24. Information Theoretic Representation Distillation. Miles et al. BMVC 2022 [[code]][19.10]\n\n## Beyond\n\n1. Do deep nets really need to be deep? Ba, Jimmy and Rich Caruana. NeurIPS 2014\n2. When Does Label Smoothing Help? Müller, Rafael, Kornblith, and Hinton. NeurIPS 2019\n3. Towards Understanding Knowledge Distillation. Phuong, Mary and Lampert, Christoph. ICML 2019\n4. Harnessing deep neural networks with logical rules. ACL 2016\n5. Adaptive Regularization of Labels. Ding, Qianggang et al. arXiv:1908.05474\n6. 
Knowledge Isomorphism between Neural Networks. Liang, Ruofan et al. arXiv:1908.01581\n7. (survey) Modeling Teacher-Student Techniques in Deep Neural Networks for Knowledge Distillation. arXiv:1912.13179\n8. Understanding and Improving Knowledge Distillation. Tang, Jiaxi et al. arXiv:2002.03532\n9. The State of Knowledge Distillation for Classification. Ruffy, Fabian and Chahal, Karanbir. arXiv:1912.10850 [[code]][18.11]\n10. Explaining Knowledge Distillation by Quantifying the Knowledge. [Zhang, Quanshi][18.13] et al. CVPR 2020\n11. DeepVID: deep visual interpretation and diagnosis for image classifiers via knowledge distillation. IEEE Trans, 2019.\n12. On the Unreasonable Effectiveness of Knowledge Distillation: Analysis in the Kernel Regime. Rahbar, Arman et al. arXiv:2003.13438\n13. (survey) Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks. Wang, Lin & Yoon, Kuk-Jin. arXiv:2004.05937\n14. Why distillation helps: a statistical perspective. arXiv:2005.10419\n15. Transferring Inductive Biases through Knowledge Distillation. Abnar, Samira et al. arXiv:2006.00555\n16. Does label smoothing mitigate label noise? Lukasik, Michal et al. ICML 2020\n17. An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation. Das, Deepan et al. arXiv:2006.03810\n18. (survey) Knowledge Distillation: A Survey. Gou, Jianping et al. IJCV 2021\n19. Does Adversarial Transferability Indicate Knowledge Transferability? Liang, Kaizhao et al. arXiv:2006.14512\n20. On the Demystification of Knowledge Distillation: A Residual Network Perspective. Jha et al. arXiv:2006.16589\n21. Enhancing Simple Models by Exploiting What They Already Know. Dhurandhar et al. ICML 2020\n22. Feature-Extracting Functions for Neural Logic Rule Learning. Gupta & Robles-Kelly.arXiv:2008.06326\n23. On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective. SeongUk et al. arXiv:2009.04120\n24. Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher. Ji, Guangda & Zhu, Zhanxing. NeurIPS 2020\n25. In Defense of Feature Mimicking for Knowledge Distillation. Wang, Guo-Hua et al. arXiv:2011.0142\n26. Solvable Model for Inheriting the Regularization through Knowledge Distillation. Luca Saglietti & Lenka Zdeborova. arXiv:2012.00194\n27. Undistillable: Making A Nasty Teacher That CANNOT Teach Students. ICLR 2021\n28. Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning. Allen-Zhu, Zeyuan & Li, Yuanzhi.(Microsoft) arXiv:2012.09816\n29. Student-Teacher Learning from Clean Inputs to Noisy Inputs. Hong, Guanzhe et al. CVPR 2021\n30. Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study. ICLR 2021 [[project]][18.32]\n31. Model Distillation for Revenue Optimization: Interpretable Personalized Pricing. Biggs, Max et al. ICML 2021\n32. A statistical perspective on distillation. Aditya et al(Google). ICML 2021\n33. (survey) Data-Free Knowledge Transfer: A Survey. Liu, Yuang et al. arXiv:2112.15278\n34. Knowledge Distillation Beyond Model Compression. Choi, Sarfraz et. al. arxiv:2007.01493\n\n## Distiller Tools\n\n1. [Neural Network Distiller][18.8]: A Python Package For DNN Compression Research. arXiv:1910.12232\n2. [TextBrewer][18.12]: An Open-Source Knowledge Distillation Toolkit for Natural Language Processing. HIT and iFLYTEK. arXiv:2002.12620\n3. 
[torchdistill][18.28]: A Modular, Configuration-Driven Framework for Knowledge Distillation. \n4. [KD-Lib][18.29]: A PyTorch library for Knowledge Distillation, Pruning and Quantization. Shen, Het et al. arXiv:2011.14691\n5. [Knowledge-Distillation-Zoo][18.30]\n6. [RepDistiller][18.31]\n7. [classification distiller][18.11]\n\n---\nNote: All papers' pdf can be found and downloaded on [arXiv](https:\u002F\u002Farxiv.org\u002Fsearch\u002F), [Bing](https:\u002F\u002Fwww.bing.com) or [Google](https:\u002F\u002Fwww.google.com).\n\nSource: \u003Chttps:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation>\n\nThanks for all contributors:\n\n[![yuang](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_87ff5905ebb5.png)](https:\u002F\u002Fgithub.com\u002FFLHonker)  [![lioutasb](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_a4785b56a6ab.png)](https:\u002F\u002Fgithub.com\u002Flioutasb)  [![KaiyuYue](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_cecdd7dd10b6.png)](https:\u002F\u002Fgithub.com\u002FKaiyuYue)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_5cfdea6fc64f.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fshivmgg)  [![cardwing](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_dd5a3e48487b.png)](https:\u002F\u002Fgithub.com\u002Fcardwing)  [![jaywonchung](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_9118b67499c7.png)](https:\u002F\u002Fgithub.com\u002Fjaywonchung)  [![ZainZhao](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_ef7215262927.png)](https:\u002F\u002Fgithub.com\u002FZainZhao)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_10e51735c5a1.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fforjiuzhou) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_2a9add4561aa.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Ffmthoker) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_74239c291260.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fcardwing)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_12c5737c49a2.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002FPyJulie)  \n\n\nContact: Yuang Liu (frankliu624![](https:\u002F\u002Fres.cloudinary.com\u002Fflhonker\u002Fimage\u002Fupload\u002Fv1605363963\u002Ffrankio\u002Fat1.png)outlook.com)\n\n\n[1.10]: https:\u002F\u002Fgithub.com\u002Fyuanli2333\u002FTeacher-free-Knowledge-Distillation\n[1.26]: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fmeta_pseudo_labels\n[1.30]: https:\u002F\u002Fgithub.com\u002Fdwang181\u002Factive-mixup\n[1.36]: https:\u002F\u002Fgithub.com\u002Fbigaidream-projects\u002Frole-kd\n[1.41]: 
https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fieg\n[1.42]: https:\u002F\u002Fgithub.com\u002Fxuguodong03\u002FSSKD\n[1.43]: https:\u002F\u002Fgithub.com\u002Fbrjathu\u002FSKD\n[1.46]: https:\u002F\u002Fgithub.com\u002FDSLLcode\u002FDSLL\n[1.59]: https:\u002F\u002Fgithub.com\u002Fxuguodong03\u002FUNIXKD\n[1.62]: https:\u002F\u002Fgithub.com\u002Fjhoon-oh\u002Fkd_data\n[1.65]: https:\u002F\u002Fgithub.com\u002Fwinycg\u002FHSAKD\n[2.15]: https:\u002F\u002Fgithub.com\u002Fclovaai\u002Foverhaul-distillation\n[2.27]: https:\u002F\u002Fgithub.com\u002FJetRunner\u002FBERT-of-Theseus\n[2.30]: https:\u002F\u002Fgithub.com\u002Fzhouzaida\u002Fchannel-distillation\n[2.31]: https:\u002F\u002Fgithub.com\u002FKaiyuYue\u002Fmgd\n[2.34]: https:\u002F\u002Fgithub.com\u002Faztc\u002FFNKD\n[2.44]: https:\u002F\u002Fgithub.com\u002FDefangChen\u002FSemCKD\n[2.45]: https:\u002F\u002Fgithub.com\u002FBUPT-GAMMA\u002FCPF\n[2.46]: https:\u002F\u002Fgithub.com\u002Fihollywhy\u002FDistillGCN.PyTorch\n[2.47]: https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fattention-feature-distillation\n[4.4]: https:\u002F\u002Fgithub.com\u002FHobbitLong\u002FRepDistiller\n[4.9]: https:\u002F\u002Fgithub.com\u002Ftaoyang1122\u002FMutualNet\n[4.15]: https:\u002F\u002Fgithub.com\u002Fwinycg\u002FMCL\n[5.11]: https:\u002F\u002Fgithub.com\u002Falinlab\u002Fcs-kd\n[5.13]: https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FSelf-PU\n[5.19]: https:\u002F\u002Fgithub.com\u002FMingiJi\u002FFRSKD\n[5.20]: https:\u002F\u002Fgithub.com\u002FVegeta2020\u002FSE-SSD\n[6.6]: https:\u002F\u002Fgithub.com\u002Fcardwing\u002FCodes-for-IntRA-KD\n[6.7]: https:\u002F\u002Fgithub.com\u002Fpassalis\u002Fpkth\n[8.20]: https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fgan-compression\n[8.23]: https:\u002F\u002Fgithub.com\u002FEvgenyKashin\u002Fstylegan2-distillation\n[8.25]: https:\u002F\u002Fgithub.com\u002Fterarachang\u002FACCV_TinyGAN\n[8.26]: https:\u002F\u002Fgithub.com\u002FSJLeo\u002FDMAD\n[8.28]: https:\u002F\u002Fgithub.com\u002Fmshahbazi72\u002FcGANTransfer\n[8.29]: https:\u002F\u002Fgithub.com\u002Fsnap-research\u002FCAT\n[10.6]: https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDeepInversion\n[10.9]: https:\u002F\u002Fgithub.com\u002Fsnudatalab\u002FKegNet\n[10.12]: https:\u002F\u002Fgithub.com\u002Fxushoukai\u002FGDFQ\n[10.16]: https:\u002F\u002Fgithub.com\u002Fleaderj1001\u002FBillion-scale-semi-supervised-learning\n[10.28]: https:\u002F\u002Fgithub.com\u002FFLHonker\u002FZAQ-code\n[10.35]: https:\u002F\u002Fgithub.com\u002Famirgholami\u002FZeroQ\n[10.30]: https:\u002F\u002Fgithub.com\u002Fcake-lab\u002Fdatafree-model-extraction\n[10.36]: https:\u002F\u002Fgithub.com\u002Fzju-vipa\u002FDataFree\n[11.8]: https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FAGD\n[12.2]: https:\u002F\u002Fgithub.com\u002Fhankook\u002FSLA\n[12.5]: https:\u002F\u002Fgithub.com\u002Fsthalles\u002FPyTorch-BYOL\n[12.6]: https:\u002F\u002Fgithub.com\u002FXHWXD\u002FDBSN\n[12.9]: https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FAdversarial-Contrastive-Learning\n[12.10]: https:\u002F\u002Fgithub.com\u002FUMBCvision\u002FCompRess\n[12.11]: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsimclr\n[12.12]: https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Ftree\u002Fmaster\u002Fmodels\u002Fofficial\u002Fdetection\u002Fprojects\u002Fself_training\n[12.13]: https:\u002F\u002Fgithub.com\u002FUMBCvision\u002FISD\n[12.25]: http:\u002F\u002Fgithub.com\u002FSYangDong\u002Ftse-t\n[12.29]: 
https:\u002F\u002Fweihonglee.github.io\u002FProjects\u002FKD-MTL\u002FKD-MTL.htm\n[12.30]: https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAMTML-KD-code\n[12.37]: https:\u002F\u002Fgithub.com\u002FAnTuo1998\u002FAE-KD\n[12.41]: https:\u002F\u002Fgithub.com\u002Fahmdtaha\u002Fknowledge_evolution\n[13.24]: https:\u002F\u002Fgithub.com\u002Fzju-vipa\u002FKamalEngine\n[14.27]: https:\u002F\u002Fgithub.com\u002Fnjulus\u002FReFilled\n[14.29]: https:\u002F\u002Fgithub.com\u002Fbradyz\u002Ftask-distillation\n[14.30]: https:\u002F\u002Fgithub.com\u002Fwanglixilinx\u002FDSRL\n[14.32]: https:\u002F\u002Fgithub.com\u002FVisionLearningGroup\u002FDomain2Vec\n[14.40]: https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FSemi-Supervised-Transfer-Learning\n[14.42]: https:\u002F\u002Fgithub.com\u002FJoyHuYY1412\u002FDDE_CIL\n[14.46]:https:\u002F\u002Fgithub.com\u002FSunbaoli\u002Fdsr-distillation\n[15.5]: https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fbyol-pytorch\n[15.28]: https:\u002F\u002Fgithub.com\u002Fmingsun-tse\u002Fcollaborative-distillation\n[15.29]: https:\u002F\u002Fgithub.com\u002Fjaywonchung\u002FShadowTutor\n[15.31]: https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FSTGraph\n[15.48]: https:\u002F\u002Fgithub.com\u002FeraserNut\u002FMTMT\n[15.58]: https:\u002F\u002Fgithub.com\u002Faimagelab\u002FVKD\n[15.64]: https:\u002F\u002Fgithub.com\u002Fvinthony\u002Fdepth-distillation\n[15.64]: https:\u002F\u002Fgithub.com\u002Fmikuhatsune\u002Fwsod_transfer\n[15.70]: https:\u002F\u002Fgithub.com\u002FSIAAAAAA\u002FMMT-PSM\n[15.80]: https:\u002F\u002Fgithub.com\u002Fsjtuxcx\u002FITES\n[15.81]: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit\n[15.86]: https:\u002F\u002Fgithub.com\u002Fbboylyg\u002FNAD\n[15.87]: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Funbiased-teacher\n[15.88]: https:\u002F\u002Fgithub.com\u002FHikariTJU\u002FLD\n[15.95]: https:\u002F\u002Fgithub.com\u002Fhzhupku\u002FDCNet\n[16.32]: https:\u002F\u002Fgithub.com\u002Fzzysay\u002FKD4NRE\n[16.35]: http:\u002F\u002Fcsse.szu.edu.cn\u002Fstaff\u002Fpanwk\u002Fpublications\u002FConference-SIGIR-20-KDCRec-Slides.pdf\n[16.352]:https:\u002F\u002Fgithub.com\u002Fdgliu\u002FSIGIR20_KDCRec\n[16.37]: https:\u002F\u002Fgithub.com\u002Fintersun\u002FCoDIR\n[16.38]: https:\u002F\u002Fgithub.com\u002Fgraytowne\u002Frank_distill\n[16.39]: https:\u002F\u002Fgithub.com\u002FSeongKu-Kang\u002FDE-RRD_CIKM20\n[16.392]:https:\u002F\u002Fgithub.com\u002Flxk00\u002FBERT-EMD\n[16.40]: https:\u002F\u002Fgithub.com\u002Fsseung0703\u002FIEPKT\n[17.9]: https:\u002F\u002Fcsyhhu.github.io\u002F\n[17.15]: https:\u002F\u002Fgithub.com\u002Flehduong\u002Fginp\n[18.8]: https:\u002F\u002Fgithub.com\u002FIntelLabs\u002Fdistiller\n[18.11]: https:\u002F\u002Fgithub.com\u002Fkaranchahal\u002Fdistiller\n[18.12]: https:\u002F\u002Fgithub.com\u002Fairaria\u002FTextBrewer\n[18.13]: http:\u002F\u002Fqszhang.com\u002F\n[18.28]: https:\u002F\u002Fgithub.com\u002Fyoshitomo-matsubara\u002Ftorchdistill\n[18.29]: https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\n[18.30]: https:\u002F\u002Fgithub.com\u002FAberHu\u002FKnowledge-Distillation-Zoo\n[18.31]: https:\u002F\u002Fgithub.com\u002FHobbitLong\u002FRepDistiller\n[18.32]: http:\u002F\u002Fzhiqiangshen.com\u002Fprojects\u002FLS_and_KD\u002Findex.html  \n[2.55]: https:\u002F\u002Fgithub.com\u002FArchipLab-LinfengZhang\u002FTask-Oriented-Feature-Distillation\n[19.10]: https:\u002F\u002Fgithub.com\u002Froymiles\u002FITRD\n","# 
令人惊叹的知识蒸馏\n\n\n![计数器](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber-658-green) \n[![星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FFLHonker\u002FAwesome-Knowledge-Distillation?label=star&style=social)](https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation)\n\n- [令人惊叹的知识蒸馏](#awesome-knowledge-distillation)\n  - [知识的不同形式](#different-forms-of-knowledge)\n    - [来自logits的知识](#knowledge-from-logits)\n    - [来自中间层的知识](#knowledge-from-intermediate-layers)\n    - [基于图的方法](#graph-based)\n    - [互信息与在线学习](#mutual-information--online-learning)\n    - [自蒸馏](#self-kd)\n    - [结构化知识](#structural-knowledge)\n    - [特权信息](#privileged-information)\n  - [知识蒸馏 + GAN](#kd--gan)\n  - [知识蒸馏 + 元学习](#kd--meta-learning)\n  - [无数据知识蒸馏](#data-free-kd)\n  - [知识蒸馏 + 自动机器学习](#kd--automl)\n  - [知识蒸馏 + 强化学习](#kd--rl)\n  - [知识蒸馏 + 自监督学习](#kd--self-supervised)\n  - [多教师与集成知识蒸馏](#multi-teacher-and-ensemble-kd)\n    - [知识融合（KA）- 浙大VIPA](#knowledge-amalgamationka---zju-vipa)\n  - [跨模态 \u002F DA \u002F 增量学习](#cross-modal--da--incremental-learning)\n  - [知识蒸馏的应用](#application-of-kd)\n    - [用于NLP和数据挖掘](#for-nlp--data-mining)\n    - [用于推荐系统](#for-recsys)\n  - [模型剪枝或量化](#model-pruning-or-quantization)\n  - [超越](#beyond)\n  - [蒸馏工具](#distiller-tools)\n\n## 知识的不同形式\n\n### 从 logits 中提取的知识\n\n1. 神经网络中的知识蒸馏。Hinton 等人，arXiv:1503.02531\n2. 基于蒸馏的噪声标签学习。Li, Yuncheng 等人，ICCV 2017\n3. 分代训练深度神经网络：更宽容的教师培养出更好的学生。arXiv:1805.05551\n4. 从教师处学习度量：用于图像嵌入的紧凑网络。Yu, Lu 等人，CVPR 2019\n5. 关系知识蒸馏。Park, Wonpyo 等人，CVPR 2019\n6. 针对响应预测从复杂网络中进行知识蒸馏的研究。Arora, Siddhartha 等人，NAACL 2019\n7. 关于知识蒸馏的有效性。Cho, Jang Hyun 和 Hariharan, Bharath，arXiv:1910.01348。ICCV 2019\n8. 重访知识蒸馏：无教师框架（通过标签平滑正则化重审知识蒸馏）。Yuan, Li 等人，CVPR 2020 [[代码]][1.10]\n9. 通过教师助手改进知识蒸馏：弥合学生与教师之间的差距。Mirzadeh 等人，arXiv:1902.03393\n10. 集成分布蒸馏。ICLR 2020\n11. 知识蒸馏中的噪声协作。ICLR 2020\n12. 关于使用知识蒸馏压缩 U-net 的研究。arXiv:1812.00249\n13. 带有噪声学生的自训练提升了 ImageNet 分类性能。Xie, Qizhe 等人（Google），CVPR 2020\n14. 变分学生：在知识蒸馏框架中学习紧凑且稀疏的网络。AAAI 2020\n15. 准备课程：通过更好的监督提升知识蒸馏效果。arXiv:1911.07471\n16. 标签的自适应正则化。arXiv:1908.05474\n17. 云端上的正负样本压缩。Xu, Yixing 等人（华为），NeurIPS 2019\n18. 快照蒸馏：单代内的师生优化。Yang, Chenglin 等人，CVPR 2019\n19. QUEST：用于知识迁移的量化嵌入空间。Jain, Himalaya 等人，arXiv:2020\n20. 条件师生学习。Z. Meng 等人，ICASSP 2019\n21. 子类蒸馏。Müller, Rafael 等人，arXiv:2002.03936\n22. 边距蒸馏：基于边距的 softmax 蒸馏。Svitov, David 和 Alyamkin, Sergey，arXiv:2003.02586\n23. 一种极其简单的知识蒸馏方法。Gao, Mengya 等人，MLR 2018\n24. 序列级知识蒸馏。Kim, Yoon 和 Rush, Alexander M.，arXiv:1606.07947\n25. 通过知识迁移增强自监督学习。Noroozi, Mehdi 等人，CVPR 2018\n26. 元伪标签。Pham, Hieu 等人，ICML 2020 [[代码]][1.26]\n27. 神经网络比人类评分者更高效的教师：针对黑盒模型的数据高效知识蒸馏的主动混合。CVPR 2020 [[代码]][1.30]\n28. 用于单声道语音分离的蒸馏二值神经网络。Chen Xiuyi 等人，IJCNN 2018\n29. 教师-班级网络：一种神经网络压缩机制。Malik 等人，arXiv:2004.03281\n30. 深度监督下的知识协同。Sun, Dawei 等人，CVPR 2019\n31. 它认为重要的就是重要的：鲁棒性通过输入梯度传递。Chan, Alvin 等人，CVPR 2020\n32. 三元损失用于知识蒸馏。Oki, Hideki 等人，IJCNN 2020\n33. 面向知识蒸馏的角色导向数据增强。ICLR 2020 [[代码]][1.36]\n34. 蒸馏尖峰：脉冲神经网络中的知识蒸馏。arXiv:2005.00288\n35. 改进的噪声学生训练用于自动语音识别。Park 等人，arXiv:2005.09629\n36. 从轻量级教师处学习以实现高效知识蒸馏。Yuang Liu 等人，arXiv:2005.09163\n37. ResKD：残差引导的知识蒸馏。Li, Xuewei 等人，arXiv:2006.04719\n38. 从严重的标签噪声中提炼有效监督。Zhang, Zizhao 等人，CVPR 2020 [[代码]][1.41]\n39. 知识蒸馏与自监督学习的结合。Xu, Guodong 等人，ECCV 2020 [[代码]][1.42]\n40. 针对少样本学习的自监督知识蒸馏。arXiv:2006.09785 [[代码]][1.43]\n41. 带有噪声类别标签的学习用于实例分割。ECCV 2020\n42. 通过对比式知识蒸馏改善弱监督视觉定位。Wang, Liwei 等人，arXiv:2007.01951\n43. 深度流式标签学习。Wang, Zhen 等人，ICML 2020 [[代码]][1.46]\n44. 在对学习者行为信息有限的情况下进行教学。Zhang, Yonggang 等人，ICML 2020\n45. 在群体表征学习中进行可区分性蒸馏。Zhang, Manyuan 等人，ECCV 2020\n46. 
知识蒸馏中的局部相关性一致性。ECCV 2020\n47. 基于素数的自适应蒸馏。Zhang, Youcai 等人，ECCV 2020\n48. 一刀切并不适用：自适应标签平滑。Krothapalli 等人，arXiv:2009.06432\n49. 从带有噪声标签的数据中学习如何学习。Li, Junnan 等人，CVPR 2019\n50. 通过一致来对抗噪声标签：一种具有共同正则化的联合训练方法。Wei, Hongxin 等人，CVPR 2020\n51. 通过多分支多样性增强进行在线知识蒸馏。Li, Zheng 等人，ACCV 2020\n52. Pea-KD：参数高效且准确的知识蒸馏。arXiv:2009.14822\n53. 通过自我知识蒸馏扩展标签平滑正则化。Wang, Jiyue 等人，arXiv:2009.05226\n54. 球面知识蒸馏。Guo, Jia 等人，arXiv:2010.07485\n55. 软标签数据集蒸馏和文本数据集蒸馏。arXiv:1910.02551\n56. Wasserstein 对比表征蒸馏。Chen, Liqun 等人，CVPR 2021\n57. 基于不确定性感知混合的计算高效知识蒸馏。Xu, Guodong 等人，CVPR 2021 [[代码]][1.59]\n58. 知识精炼：从解耦标签中学习。Ding, Qianggang 等人，AAAI 2021\n59. 火箭发射：一个通用且高效的框架，用于训练表现良好的轻量级网络。Zhou, Guorui 等人，AAAI 2018\n60. 为长尾识别蒸馏虚拟样本。He, Yin-Yin 等人，CVPR 2021\n61. 长尾学习的平衡知识蒸馏。Zhang, Shaoyu 等人，arXiv:2014.10510\n62. 比较 Kullback-Leibler 散度和均方误差损失在知识蒸馏中的应用。Kim, Taehyeon 等人，IJCAI 2021 [[代码]][1.62]\n63. 并非所有知识都同等重要。Li, Ziyun 等人，arXiv:2106.01489\n64. 知识蒸馏：好老师要有耐心和一致性。Beyer 等人，arXiv:2106.05237v1\n65. 层次化自监督增强型知识蒸馏。Yang 等人，IJCAI 2021 [[代码]][1.65]\n\n### 中间层知识\n\n1. Fitnets：轻量级深度网络的提示。Romero, Adriana 等人。arXiv:1412.6550\n2. 更加关注注意力：通过注意力迁移提升卷积神经网络性能。Zagoruyko 等人。ICLR 2017\n3. 知识投影：用于高效设计更轻量、更快速的深度神经网络。Zhang, Zhi 等人。arXiv:1710.09505\n4. 知识蒸馏的馈赠：快速优化、网络压缩与迁移学习。Yim, Junho 等人。CVPR 2017\n5. 喜欢你所喜欢的：基于神经元选择性迁移的知识蒸馏。Huang, Zehao & Wang, Naiyan。2017\n6. 复杂网络的释义：通过因子迁移进行网络压缩。Kim, Jangho 等人。NeurIPS 2018\n7. 基于雅可比匹配的知识迁移。ICML 2018\n8. 使用奇异值分解的自监督知识蒸馏。Lee, Seung Hyun 等人。ECCV 2018\n9. 基于概率知识迁移的深度表示学习。Passalis 等人。ECCV 2018\n10. 用于知识迁移的变分信息蒸馏。Ahn, Sungsoo 等人。CVPR 2019\n11. 基于实例关系图的知识蒸馏。Liu, Yufan 等人。CVPR 2019\n12. 基于路径约束优化的知识蒸馏。Jin, Xiao 等人。ICCV 2019\n13. 保持相似性的知识蒸馏。Tung, Frederick 和 Mori Greg。ICCV 2019\n14. MEAL：基于对抗学习的多模型集成。Shen, Zhiqiang、He, Zhankui 和 Xue Xiangyang。AAAI 2019\n15. 特征蒸馏的全面革新。Heo, Byeongho 等人。ICCV 2019 [[代码]][2.15]\n16. 特征图级别的在线对抗知识蒸馏。ICML 2020\n17. 基于细粒度特征模仿的目标检测器蒸馏。ICLR 2020\n18. 知识挤压式对抗网络压缩。Changyong, Shu 等人。AAAI 2020\n19. 分阶段知识蒸馏。Kulkarni, Akshay 等人。arXiv:1911.06786\n20. 来自内部表征的知识蒸馏。AAAI 2020\n21. 知识流：超越你的老师。ICLR 2019\n22. LIT：用于模型压缩的中间表征学习训练。ICML 2019\n23. 通过噪声特征蒸馏提升迁移学习的对抗鲁棒性。Chin, Ting-wu 等人。arXiv:2002.02998\n24. 带有内部蒸馏的背包剪枝。Aflalo, Yonathan 等人。arXiv:2002.08258\n25. 残差知识蒸馏。Gao, Mengya 等人。arXiv:2002.09168\n26. 基于适应性实例归一化知识蒸馏。Yang, Jing 等人。arXiv:2003.04289\n27. 赫拉克勒斯之Bert：通过渐进式模块替换压缩Bert。Xu, Canwen 等人。arXiv:2002.02925 [[代码]][2.27]\n28. 锋火知识蒸馏：脉冲神经网络中的知识蒸馏。arXiv:2005.00727\n29. 面向深度神经网络的广义贝叶斯后验期望蒸馏。Meet 等人。arXiv:2005.08110\n30. 特征图级别的在线对抗知识蒸馏。Chung, Inseop 等人。ICML 2020\n31. 通道蒸馏：面向知识蒸馏的通道级注意力。Zhou, Zaida 等人。arXiv:2006.01683 [[代码]][2.30]\n32. 匹配引导的蒸馏。ECCV 2020 [[代码]][2.31]\n33. 可微分特征聚合搜索用于知识蒸馏。ECCV 2020\n34. 交互式知识蒸馏。Fu, Shipeng 等人。arXiv:2007.01476\n35. 面向图像分类的特征归一化知识蒸馏。ECCV 2020 [[代码]][2.34]\n36. 面向深度神经网络的层级知识蒸馏。Li, Hao Ting 等人。《应用科学》杂志，2019年\n37. 基于特征图的知识蒸馏用于图像分类。Chen, Weichun 等人。ACCV 2018\n38. 知识蒸馏中高效的卷积核迁移。Qian, Qi 等人。arXiv:2009.14416\n39. 视频动作识别中参数域与频谱域的协同蒸馏。arXiv:2009.06902\n40. 基于卷积核的渐进式蒸馏用于加法神经网络。Xu, Yixing 等人。NeurIPS 2020\n41. 基于引导式对抗对比学习的特征蒸馏。Bai, Tao 等人。arXiv:2009.09922\n42. 关注特征，更快地迁移CNN。Wang, Kafeng 等人。ICLR 2019\n43. 多层级知识蒸馏。Ding, Fei 等人。arXiv:2012.00573\n44. 带语义校准的跨层蒸馏。Chen, Defang 等人。AAAI 2021 [[代码]][2.44]\n45. 面向多出口架构的协调一致密集知识蒸馏训练。Wang, Xinglu 和 Li, Yingming。AAAI 2021\n46. 基于师生模型混合前向的稳健知识迁移。Song, Liangchen 等人。AAAI 2021\n47. 展示、注意并蒸馏：基于注意力的特征匹配知识蒸馏。Ji, Mingi 等人。AAAI 2021 [[代码]][2.47]\n48. MINILMv2：用于压缩预训练Transformer的多头自注意力关系蒸馏。Wang, Wenhui 等人。arXiv:2012.15828\n49. ALP-KD：基于注意力的层级投影用于知识蒸馏。Peyman 等人。AAAI 2021\n50. 基于层级聚类寻找信息丰富的提示点以进行知识蒸馏。Reyhan 等人。arXiv:2103.00053\n51. 解决蒸馏过程中师生知识差异。Han, Jiangfan 等人。arXiv:2103.16844\n52. 
基于进化知识蒸馏的学生网络学习。Zhang, Kangkai 等人。arXiv:2103.13811\n53. 通过知识回顾进行知识蒸馏。Chen, Pengguang 等人。CVPR 2021\n54. 基于稀疏表示匹配的知识蒸馏。Tran 等人。arXiv:2103.17012\n55. 面向任务的特征蒸馏。Zhang 等人。NeurIPS 2020 [[代码]][2.55]\n56. 来自未标注数据的对抗性知识迁移。Gupta 等人。ACM-MM 2020 [代码](https:\u002F\u002Fgithub.com\u002Fagupt013\u002Fakt)\n57. 知识蒸馏作为高效预训练：更快收敛、更高数据效率和更好迁移能力。He 等人。CVPR 2020\n58. PDF-Distil：在基于特征的知识蒸馏中纳入预测分歧以用于目标检测。Zhang 等人。BMVC 2021 [代码](https:\u002F\u002Fgithub.com\u002FZHANGHeng19931123\u002FMutualGuide)\n\n### 基于图的方法\n\n1. 基于图的知识蒸馏：多头注意力网络。Lee, Seunghyun 和 Song, Byung Cheol，arXiv:1907.02226\n2. 通过多任务知识蒸馏的图表示学习。arXiv:1911.05700\n3. 利用图进行深度几何知识蒸馏。arXiv:1911.03080\n4. 更好更快：通过图蒸馏从多个自监督学习任务中迁移知识用于视频分类。IJCAI 2018\n5. 从图卷积网络中蒸馏知识。Yang, Yiding 等人，CVPR 2020 [[代码]][2.46]\n6. 利用外部知识进行显著性预测。Zhang, Yifeng 等人，arXiv:2007.13839\n7. 通过学习从外部知识迁移实现多标签零样本分类。Huang, He 等人，arXiv:2007.15610\n8. 图卷积网络上的可靠数据蒸馏。Zhang, Wentao 等人，ACM SIGMOD 2020\n9. 图卷积网络的互学教学。Zhan, Kun 等人，Future Generation Computer Systems，2021\n10. DistilE：为更快速、更经济的推理而蒸馏知识图嵌入。Zhu, Yushan 等人，arXiv:2009.05912\n11. Distill2Vec：利用知识蒸馏的动态图表示学习。Antaris, Stefanos 和 Rafailidis, Dimitrios，arXiv:2011.05664\n12. 自蒸馏图神经网络。Chen, Yuzhao 等人，arXiv:2011.02255\n13. 迭代式图自蒸馏。iclr 2021\n14. 提取图神经网络的知识并超越它：一种有效的知识蒸馏框架。Yang, Cheng 等人，WWW 2021 [[代码]][2.45]\n15. 带有特权信息的RGB-D视频中基于图蒸馏的动作检测。Luo, Zelun 等人，ECCV 2018\n16. 基于图一致性的均值教学用于无监督域适应的人体重识别。Liu, Xiaobin 和 Zhang, Shiliang，IJCAI 2021\n\n### 互信息与在线学习\n\n1. 用于知识蒸馏的关联一致性。Peng, Baoyun 等人，ICCV 2019\n2. 保持相似性的知识蒸馏。Tung, Frederick 和 Mori Greg，ICCV 2019\n3. 用于知识迁移的变分信息蒸馏。Ahn, Sungsoo 等人，CVPR 2019\n4. 对比表示蒸馏。Tian, Yonglong 等人，ICLR 2020 [[RepDistill]][4.4]\n5. 通过协作学习进行在线知识蒸馏。Guo, Qiushan 等人，CVPR 2020\n6. 同辈协作学习用于在线知识蒸馏。Wu, Guile 和 Gong, Shaogang，AAAI 2021\n7. 通过密集跨层互蒸馏进行知识迁移。ECCV 2020\n8. MutualNet：通过来自网络宽度和分辨率的互学自适应卷积网络。Yang, Taojiannan 等人，ECCV 2020 [[代码]][4.9]\n9. AMLN：基于对抗的互学网络用于在线知识蒸馏。ECCV 2020\n10. 通过在线互知实现跨模态医学图像分割。Li, Kang 等人，AAAI 2021\n11. 联邦知识蒸馏。Seo, Hyowoon 等人，arXiv:2011.02367\n12. 利用互均值教学进行无监督图像分割。Wu, Zhichao 等人，arXiv:2012.08922\n13. 用于自监督和半监督学习的指数移动平均归一化。Cai, Zhaowei 等人，arXiv:2101.08482\n14. 用于半监督语义分割的鲁棒互学。Zhang, Pan 等人，arXiv:2106.00609\n15. 用于视觉表示学习的互对比学习。Yang 等人，AAAI 2022 [[代码]][4.15]\n16. 信息论视角下的表示蒸馏。Miles 等人，BMVC 2022 [[代码]][19.10]\n\n### 自蒸馏\n\n1. Moonshine：使用廉价卷积进行蒸馏。Crowley, Elliot J. 等人，NeurIPS 2018\n2. 成为自己老师：通过自蒸馏提升卷积神经网络性能。Zhang, Linfeng 等人，ICCV 2019\n3. 通过自注意力蒸馏学习轻量级车道检测CNN。Hou, Yuenan 等人，ICCV 2019\n4. BAM！重生的多任务网络用于自然语言理解。Clark, Kevin 等人，ACL 2019，短文\n5. 自然语言处理中的自知识蒸馏。Hahn, Sangchul 和 Choi, Heeyoul，arXiv:1908.01851\n6. 重新思考数据增强：自监督与自蒸馏。Lee, Hankook 等人，ICLR 2020\n7. MSD：通过深度神经网络内的多分类器进行多自蒸馏学习。arXiv:1911.09418\n8. 自蒸馏在希尔伯特空间中增强正则化。Mobahi, Hossein 等人，NeurIPS 2020\n9. MINILM：用于预训练Transformer任务无关压缩的深度自注意力蒸馏。Wang, Wenhui 等人，arXiv:2002.10957\n10. 通过自知识蒸馏正则化类别预测。CVPR 2020 [[代码]][5.11]\n11. 自蒸馏作为实例特定的标签平滑。Zhang, Zhilu 和 Sabuncu, Mert R.，NeurIPS 2020\n12. Self-PU：自我增强且校准的正类-未标记训练。Chen, Xuxi 等人，ICML 2020 [[代码]][5.13]\n13. S2SD：用于深度度量学习的同时性相似性自蒸馏。Karsten 等人，ICML 2021\n14. 用于弱监督目标检测的全面注意力自蒸馏。Huang, Zeyi 等人，NeurIPS 2020\n15. 基于蒸馏的多出口架构训练。Phuong, Mary 和 Lampert, Christoph H.，ICCV 2019\n16. 用于半监督域适应的成对自蒸馏。iclr 2021\n17. SEED：自监督蒸馏。ICLR 2021\n18. 自特征正则化：无需教师模型的自特征蒸馏。Fan, Wenxuan 和 Hou, Zhenyan，arXiv:2103.07350\n19. 通过自我教导完善自我：利用自知识蒸馏进行特征精炼。Ji, Mingi 等人，CVPR 2021 [[代码]][5.19]\n20. SE-SSD：从点云中自集成单阶段目标检测器。Zheng, Wu 等人，CVPR 2021 [[代码]][5.20]\n21. 结合批次知识集成的自蒸馏可提升ImageNet分类性能。Ge, Yixiao 等人，CVPR 2021\n22. 通过对比自蒸馏实现紧凑的单张图像超分辨率。IJCAI 2021\n23. DearKD：面向视觉Transformer的数据高效早期知识蒸馏 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.12997.pdf)\n24. 
使用复用教师分类器进行知识蒸馏 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.14001.pdf)\n25. 基于上一个迷你批次的自蒸馏用于一致性正则化 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.16172.pdf)\n26. 解耦合知识蒸馏 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.08679.pdf)\n\n### 结构知识\n\n1. 复杂网络的释义：通过因子迁移进行网络压缩。金章浩等。NeurIPS 2018\n2. 关系知识蒸馏。朴元杓等。CVPR 2019\n3. 基于实例关系图的知识蒸馏。刘宇凡等。CVPR 2019\n4. 对比表示蒸馏。田永龙等。ICLR 2020\n5. 通过结构化暗知识教授教学。ICLR 2020\n6. 道路标记分割中的区域间亲和力蒸馏。侯元楠等。CVPR 2020 [[代码]][6.6]\n7. 基于信息流建模的异构知识蒸馏。帕萨利斯等。CVPR 2020 [[代码]][6.7]\n8. 用于知识迁移的非对称度量学习。布德尼克、马特乌什与阿夫里西斯，扬尼斯。arXiv:2006.16331\n9. 知识蒸馏中的局部相关性一致性。ECCV 2020\n10. 少样本类别增量学习。陶晓宇等。CVPR 2020\n11. 用于图像到图像转换的语义关系保持知识蒸馏。ECCV 2020\n12. 可解释的前景目标搜索作为知识蒸馏。ECCV 2020\n13. 通过类别结构改进知识蒸馏。ECCV 2020\n14. 基于关系知识蒸馏的少样本类别增量学习。董松林等。AAAI 2021\n15. 补充关系对比蒸馏。朱金国等。CVPR 2021\n16. 信息论表示蒸馏。迈尔斯等。BMVC 2022 [[代码]][19.10]\n\n### 特权信息\n\n1. 利用特权信息学习：相似性控制与知识转移。瓦普尼克，弗拉基米尔与劳夫，伊兹麦洛夫。MLR 2015\n2. 统一蒸馏与特权信息。洛佩兹-帕斯，大卫等。ICLR 2016\n3. 通过蒸馏与量化进行模型压缩。波利诺，安东尼奥等。ICLR 2018\n4. KDGAN：基于生成对抗网络的知识蒸馏。王小杰。NeurIPS 2018\n5. 使用更少帧实现高效视频分类。巴尔德瓦杰，什韦塔等。CVPR 2019\n6. 在多任务学习中保留特权信息。唐峰毅等。KDD 2019\n7. 一种基于特权信息的回归与分类通用元损失函数。阿西夫，阿米娜等。arXiv:1811.06885\n8. 通过生成对抗网络的模型蒸馏进行私有知识转移。高迪与卓成。AAAI 2020\n9. 面向在线动作检测的特权知识蒸馏。赵培森等。cvpr 2021\n10. 带有特权条款的学习的对抗性蒸馏。王小杰等。TPAMI 2019\n\n## KD + GAN\n\n1. 通过条件对抗网络的知识蒸馏加速浅层稀疏网络训练。徐征等。arXiv:1709.00513\n2. KTAN：知识迁移对抗网络。刘沛业等。arXiv:1810.08126\n3. KDGAN：基于生成对抗网络的知识蒸馏。王小杰。NeurIPS 2018\n4. 可移植学生网络的对抗式学习。王云鹤等。AAAI 2018\n5. 对抗式网络压缩。贝拉吉安尼斯等。ECCV 2018\n6. 跨模态蒸馏：以条件生成对抗网络为例。ICASSP 2018\n7. 基于外部知识的高效推荐的对抗式蒸馏。TOIS 2018\n8. 使用条件对抗网络加速学生网络训练。徐征等。BMVC 2018\n9. DAFL：无数据的学生网络学习。陈涵婷等。ICCV 2019\n10. MEAL：基于对抗学习的多模型集成。沈志强等。AAAI 2019\n11. 支持决策边界的对抗样本知识蒸馏。许炳浩等。AAAI 2019\n12. 利用真实标签：基于对抗模仿的知识蒸馏方法用于事件检测。刘健等。AAAI 2019\n13. 对抗鲁棒蒸馏。戈德布鲁姆，米卡等。AAAI 2020\n14. GAN-知识蒸馏用于单阶段目标检测。洪伟等。arXiv:1906.08467\n15. 终身GAN：面向条件图像生成的持续学习。昆杜等。arXiv:1908.03884\n16. 使用知识蒸馏压缩GAN。阿圭纳尔多，安吉琳等。arXiv:1902.00159\n17. 特征图级在线对抗知识蒸馏。ICML 2020\n18. MineGAN：从GAN向目标域有效转移知识，即使只有少量图片。王亚星等。CVPR 2020\n19. 为图像转换蒸馏便携式生成对抗网络。陈涵婷等。AAAI 2020\n20. GAN压缩：用于交互式条件GAN的高效架构。朱俊彦等。CVPR 2020 [[代码]][8.20]\n21. 对抗式网络压缩。贝拉吉安尼斯等。ECCV 2018\n22. P-KDGAN：基于GAN的渐进式知识蒸馏，用于单类新颖性检测。张志伟等。IJCAI 2020\n23. StyleGAN2蒸馏用于前馈式图像处理。维亚佐韦茨基等。ECCV 2020 [[代码]][8.23]\n24. HardGAN：一款雾霾感知型表示蒸馏GAN，用于单张图像去雾。ECCV 2020\n25. TinyGAN：为条件图像生成蒸馏BigGAN。ACCV 2020 [[代码]][8.25]\n26. 通过可微掩码与协同注意力蒸馏学习高效GAN。李绍杰等。arXiv:2011.08382 [[代码]][8.26]\n27. 自监督GAN压缩。余冲与池杰夫。arXiv:2007.01491\n28. 教师的作用远不止教学：压缩图像到图像模型。CVPR 2021 [[代码]][8.29]\n29. 通过cGAN生成的样本进行知识蒸馏与迁移，用于图像分类与回归。丁欣等。arXiv:2104.03164\n30. 内容感知GAN压缩。刘宇晨等。CVPR 2021\n\n## KD + 元学习\n\n1. 少样本知识蒸馏用于高效网络压缩。李天宏等。CVPR 2020\n2. 学习什么以及在哪里进行迁移。张允勋等，ICML 2019\n3. 跨学习过程的知识迁移。莫雷诺，巴勃罗·G等。ICLR 2019\n4. 语义感知的知识保存用于零样本基于草图的图像检索。刘青等。ICCV 2019\n5. 多样性与合作：用于少样本分类的集成方法。德沃尔尼克，尼基塔等。ICCV 2019\n6. 知识表征：用于知识蒸馏的高效、稀疏先验知识表示。arXiv:1911.05329v1\n7. 用于生成式建模的渐进式知识蒸馏。ICLR 2020\n8. 通过交叉蒸馏进行少样本网络压缩。AAAI 2020\n9. MetaDistiller：基于元学习的自顶向下蒸馏实现网络自我增强。刘本林等。ECCV 2020\n10. 带有类内知识迁移的少样本学习。arXiv:2008.09892\n11. 基于知识迁移的少样本目标检测。金健旭等。arXiv:2008.12496\n12. 蒸馏后的单次联邦学习。arXiv:2009.07999\n13. Meta-KD：跨领域语言模型压缩的元知识蒸馏框架。潘浩杰等。arXiv:2012.01266\n14. 用于少样本知识蒸馏的渐进式网络嫁接。沈承超等。AAAI 2021\n\n## 无数据知识蒸馏\n\n1. 面向深度神经网络的无数据知识蒸馏。NeurIPS 2017\n2. 深度网络中的零样本知识蒸馏。ICML 2019\n3. DAFL：学生网络的无数据学习。ICCV 2019\n4. 基于对抗信念匹配的零样本知识迁移。Micaelli、Paul 和 Storkey, Amos。NeurIPS 2019\n5. 梦境蒸馏：一种与数据无关的模型压缩框架。Kartikeya 等人。ICML 2019\n6. 梦境蒸馏：通过 DeepInversion 实现的无数据知识迁移。Yin, Hongxu 等人。CVPR 2020 [[代码]][10.6]\n7. 无数据对抗蒸馏。Fang, Gongfan 等人。CVPR 2020\n8. 内在的知识：无数据模型压缩方法。Haroush, Matan 等人。CVPR 2020\n9. 在没有任何可观察数据的情况下进行知识提取。Yoo, Jaemin 等人。NeurIPS 2019 [[代码]][10.9]\n10. 
通过 Group-Stack 双 GAN 进行无数据知识融合。CVPR 2020\n11. DeGAN：用于从训练好的分类器中检索代表性样本的数据增强 GAN。Addepalli, Sravanti 等人。arXiv:1912.11960\n12. 基于生成的低比特位无数据量化。Xu, Shoukai 等人。ECCV 2020 [[代码]][10.12]\n13. 这个数据集并不存在：从生成图像中训练模型。arXiv:1911.02888\n14. MAZE：使用零阶梯度估计的无数据模型窃取攻击。Sanjay 等人。arXiv:2005.03161\n15. 生成式教学网络：通过学习生成合成训练数据加速神经架构搜索。Such 等人。ECCV 2020\n16. 十亿规模的半监督图像分类学习。FAIR。arXiv:1905.00546 [[代码]][10.16]\n17. 基于对抗知识蒸馏的无数据网络量化。Choi, Yoojin 等人。CVPRW 2020\n18. 面向文本分类的对抗自监督无数据蒸馏。EMNLP 2020\n19. 通过无数据知识迁移实现精确量化和剪枝。arXiv:2010.07334\n20. 利用数据增强 GAN 进行分割任务的无数据知识蒸馏。Bhogale 等人。arXiv:2011.00809\n21. 分层无数据 CNN 压缩。Horton, Maxwell 等人（Apple Inc.）。cvpr 2021\n22. 任意迁移集合在无数据知识蒸馏中的有效性。Nayak 等人。WACV 2021\n23. 学校式学习：多教师知识反演用于无数据量化。Li, Yuhang 等人。cvpr 2021\n24. 大规模生成式无数据蒸馏。Luo, Liangchen 等人。cvpr 2021\n25. 域印象：一种无需源数据的域适应方法。Kurmi 等人。WACV 2021\n26. 在野外学习学生网络。（HUAWEI-Noah）。CVPR 2021\n27. 无数据知识蒸馏用于图像超分辨率。（HUAWEI-Noah）。CVPR 2021\n28. 零样本对抗量化。Liu, Yuang 等人。CVPR 2021 [[代码]][10.28]\n29. 面向语义分割的无源域适应。Liu, Yuang 等人。CVPR 2021\n30. 无数据模型提取。Jean-Baptiste 等人。CVPR 2021 [[代码]][10.30]\n31. 深入数据：以有效替代训练的方式进行黑盒攻击。CVPR 2021\n32. 使用无标签对抗扰动结合泰勒近似进行零样本知识蒸馏。Li, Kang 等人。IEEE Access，2021。\n33. 半真半假蒸馏用于类增量语义分割。Huang, Zilong 等人。arXiv:2104.00875\n34. 双判别器对抗蒸馏用于无数据模型压缩。Zhao, Haoran 等人。TCSVT 2021\n35. 穿透梯度：通过 GradInversion 恢复图像批次。Yin, Hongxu 等人。CVPR 2021\n36. 对比式模型反演用于无数据知识蒸馏。Fang, Gongfan 等人。IJCAI 2021 [[代码]][10.36]\n37. 面向图神经网络的无图知识蒸馏。Deng, Xiang 和 Zhang, Zhongfei。arXiv:2105.07519\n38. 基于决策的黑盒模式下的零样本知识蒸馏。Wang Zi。ICML 2021\n39. 面向异构联邦学习的无数据知识蒸馏。Zhu, Zhuangdi 等人。ICML 2021\n\n\n其他无数据模型压缩：\n\n- 面向深度神经网络的无数据参数剪枝。Srinivas, Suraj 等人。arXiv:1507.06149\n- 通过权重均衡和偏置校正实现的无数据量化。Nagel, Markus 等人。ICCV 2019\n- DAC：卷积网络的无数据自动加速。Li, Xin 等人。WACV 2019\n- 一种保护隐私的 DNN 剪枝与移动加速框架。Zhan, Zheng 等人。arXiv:2003.06513\n- ZeroQ：一种新颖的零样本量化框架。Cai 等人。CVPR 2020 [[代码]][10.35]\n- 为无数据量化多样化样本生成。Zhang, Xiangguo 等人。CVPR 2021\n\n## 知识蒸馏 + 自动机器学习\n\n1. 通过集成学习改进神经架构搜索图像分类器。Macko, Vladimir 等人。arXiv:1903.06236\n2. 基于知识蒸馏的分块监督神经架构搜索。Li, Changlin 等人。CVPR 2020\n3. 通过神经架构搜索实现接近最优的知识蒸馏。Kang, Minsoo 等人。AAAI 2020\n4. 寻找更好的学生来学习蒸馏知识。Gu, Jindong 和 Tresp, Volker arXiv:2001.11612\n5. 通过知识蒸馏规避 AutoAugment 的异常值。Wei, Longhui 等人。arXiv:2003.11342\n6. 通过可变换架构搜索进行网络剪枝。Dong, Xuanyi 和 Yang, Yi。NeurIPS 2019\n7. 搜索以蒸馏：珍珠无处不在，只是肉眼看不见而已。Liu Yu 等人。CVPR 2020\n8. AutoGAN-Distiller：搜索以压缩生成对抗网络。Fu, Yonggan 等人。ICML 2020 [[代码]][11.8]\n9. Joint-DetNAS：用 NAS、剪枝和动态蒸馏升级你的检测器。CVPR 2021\n\n## 知识蒸馏 + 强化学习\n\n1. N2N 学习：通过策略梯度强化学习实现网络到网络的压缩。Ashok, Anubhav 等人。ICLR 2018\n2. 知识流动：超越你的老师。Liu, Iou-jen 等人。ICLR 2019\n3. 跨学习过程的知识转移。Moreno, Pablo G 等人。ICLR 2019\n4. 通过随机网络蒸馏进行探索。Burda, Yuri 等人。ICLR 2019\n5. 针对强化学习的周期性群体内知识蒸馏。Hong, Zhang-Wei 等人。arXiv:2002.00149\n6. 在同伴之间传递异质知识：一种模型蒸馏方法。Xue, Zeyue 等人。arXiv:2002.02202\n7. 代理经验回放：面向分布式强化学习的联邦蒸馏。Cha, han 等人。arXiv:2005.06105\n8. 双重策略蒸馏。Lai, Kwei-Herng 等人。IJCAI 2020\n9. 通过强化学习实现师生课程学习：预测医院住院患者的入院地点。El-Bouri, Rasheed 等人。ICML 2020\n10. 面向知识蒸馏的强化多教师选择。Yuan, Fei 等人。AAAI 2021\n11. 通过 Oracle 策略蒸馏实现订单执行的通用交易。Fang, Yuchen 等人。AAAI 2021\n12. 基于强化知识蒸馏的弱监督深度回归跟踪器域适应。Dunnhofer 等人。IEEE RAL\n\n## KD + 自监督\n\n1. 逆转循环：通过增强的单目蒸馏实现自监督深度立体视觉。ECCV 2020\n2. 基于输入变换的自监督标签增强。Lee, Hankook 等人。ICML 2020 [[代码]][12.2]\n3. 通过选择性自监督自训练改进目标检测。Li, Yandong 等人。ECCV 2020\n4. 从自监督学习中蒸馏视觉先验。Zhao, Bingchen 和 Wen, Xin。ECCVW 2020\n5. 自举自己的潜在表示：一种新的自监督学习方法。Grill 等人。arXiv:2006.07733 [[代码]][12.5]\n6. 无配对的深度图像去噪学习。Wu, Xiaohe 等人。arXiv:2008.13711 [[代码]][12.6]\n7. SSKD：用于跨域自适应行人重识别的自监督知识蒸馏。Yin, Junhui 等人。arXiv:2009.05972\n8. 通过从在线自我解释中蒸馏知识进行内省式学习。Gu, Jindong 等人。ACCV 2020\n9. 通过对抗对比学习实现稳健的预训练。Jiang, Ziyu 等人。NeurIPS 2020 [[代码]][12.9]\n10. 
CompRess：通过压缩表征进行自监督学习。Koohpayegani 等人。NeurIPS 2020 [[代码]][12.10]\n11. 大型自监督模型是强大的半监督学习者。Che, Ting 等人。NeurIPS 2020 [[代码]][12.11]\n12. 重新思考预训练与自训练。Zoph, Barret 等人。NeurIPS 2020 [[代码]][12.12]\n13. ISD：通过迭代相似性蒸馏进行自监督学习。Tejankar 等人。cvpr 2021 [[代码]][12.13]\n14. 动量²教师：带有动量统计的动量教师，用于自监督学习。Li, Zeming 等人。arXiv:2101.07525\n15. 超越自监督：一种简单而有效的网络蒸馏替代方案，用于改进骨干网络。Cui, Cheng 等人。arXiv:2103.05959\n16. 通过组合式对比学习蒸馏视听知识。Chen, Yanbei 等人。CVPR 2021\n17. DisCo：利用蒸馏对比学习修复轻量级模型上的自监督学习。Gao, Yuting 等人。arXiv:2104.09124\n18. 用于半监督医学图像分割的自集成对比学习。Xiang, Jinxi 等人。arXiv:2105.12924\n19. 基于交叉伪监督的半监督语义分割。Chen, Xiaokang 等人。CPVR 2021\n20. 来自未标记数据的对抗性知识迁移。Gupta 等人。ACM-MM 2020 [代码](https:\u002F\u002Fgithub.com\u002Fagupt013\u002Fakt)\n\n## 多教师与集成KD\n\n1. 从多个教师网络中学习。You, Shan 等人。KDD 2017\n2. 单教师多学生学习。You, Shan 等人。AAAI 2018\n3. 通过即时原生集成进行知识蒸馏。Lan, Xu 等人。NeurIPS 2018\n4. 面向深度学习的私有训练数据的半监督知识迁移。ICLR 2017\n5. 知识适应：教授适应能力。Arxiv:1702.02052\n6. 深度模型压缩：从噪声教师那里蒸馏知识。Sau, Bharat Bhusan 等人。arXiv:1610.09650\n7. 平均教师是更好的榜样：加权平均一致性目标可改善半监督深度学习效果。Tarvainen, Antti 和 Valpola, Harri。NeurIPS 2017\n8. 再生神经网络。Furlanello, Tommaso 等人。ICML 2018\n9. 深度互学。Zhang, Ying 等人。CVPR 2018\n10. 深度神经网络的协作学习。Song, Guocong 和 Chai, Wei。NeurIPS 2018\n11. 数据蒸馏：迈向全监督学习。Radosavovic, Ilija 等人。CVPR 2018\n12. 基于知识蒸馏的多语言神经机器翻译。ICLR 2019\n13. 用蒸馏统一异构分类器。Vongkulbhisal 等人。CVPR 2019\n14. 蒸馏后的行人重识别：迈向更可扩展的系统。Wu, Ancong 等人。CVPR 2019\n15. 多样性与合作：面向少样本分类的集成方法。Dvornik, Nikita 等人。ICCV 2019\n16. 用于网络问答系统的两阶段多教师知识蒸馏模型压缩。Yang, Ze 等人。WSDM 2020\n17. FEED：用于知识蒸馏的特征级集成。Park, SeongUk 和 Kwak, Nojun。AAAI 2020\n18. 随机性和跳跃连接改善知识迁移。Lee, Kwangjin 等人。ICLR 2020\n19. 与多样化同伴进行在线知识蒸馏。Chen, Defang 等人。AAAI 2020\n20. 海德拉：为模型蒸馏保持集成多样性。Tran, Linh 等人。arXiv:2001.04694\n21. 具有自适应推理成本的蒸馏分层神经网络集成。Ruiz, Adria 等人。arXiv:2003.01474\n22. 从声学模型集成中蒸馏知识，用于联合CTC-注意力端到端语音识别。Gao, Yan 等人。arXiv:2005.09310\n23. 通过多模态知识发现进行大规模少样本学习。ECCV 2020\n24. 协作学习加速StyleGAN嵌入。Guan, Shanyan 等人。arXiv:2007.01758\n25. 用于半监督目标检测的时序自集成教师。Chen, Cong 等人。IEEE 2020 [[代码]][12.25]\n26. 双教师：整合域内与域外教师，用于标注高效的 心脏分割。MICCAI 2020\n27. 联合渐进式知识蒸馏与无监督域适应。Nguyen-Meidine 等人。WACV 2020\n28. 基于师生网络的半监督学习，用于广义属性预测。Shin, Minchul 等人。ECCV 2020\n29. 用于多任务学习的知识蒸馏。Li, WeiHong 和 Bilen, Hakan。arXiv:2007.06889 [[项目]][12.29]\n30. 自适应多教师多层次知识蒸馏。Liu, Yuang 等人。Neurocomputing 2020 [[代码]][12.30]\n31. 利用知识蒸馏进行在线集成模型压缩。ECCV 2020\n32. 从多位专家那里学习：面向长尾分类的自定进度知识蒸馏。ECCV 2020\n33. 团体知识转移：在边缘设备上协同训练大型CNN。He, Chaoyang 等人。arXiv:2007.14513\n34. 使用多名助教进行密集引导的知识蒸馏。Son, Wonchul 等人。arXiv:2009.08825\n35. ProxylessKD：直接知识蒸馏，继承分类器用于人脸识别。Shi, Weidong 等人。arXiv:2011.00265\n36. 同意分歧：梯度空间中的自适应集成知识蒸馏。Du, Shangchen 等人。NeurIPS 2020 [[代码]][12.37]\n37. 为知识蒸馏强化多教师选择。Yuan, Fei 等人。AAAI 2021\n38. 基于多教师网络的类增量实例分割。Gu, Yanan 等人。AAAI 2021\n39. 通过多次知识转移进行师生协作学习。Sun, Liyuan 等人。arXiv:2101.08471\n40. 利用跨类知识传播高效传递条件GAN。Shahbaziet al. CVPR 2021 [[代码]][8.28]\n41. 神经网络中的知识进化。Taha, Ahmed 等人。CVPR 2021 [[代码]][12.41]\n42. 通过在线知识蒸馏蒸馏出强大的学生模型。Li, Shaojie 等人。arXiv:2103.14473\n\n### 知识融合（KA）- zju-VIPA\n\n[VIPA - KA][13.24]\n\n1. 面向综合分类的知识融合。沈成超等。AAAI 2019\n2. 融合过滤后的知识：从多任务教师中学习任务定制的学生模型。叶静文等。IJCAI 2019\n3. 基于共同特征学习的异构网络知识融合。罗思慧等。IJCAI 2019\n4. 学生变大师：用于联合场景解析、深度估计等任务的知识融合。叶静文等。CVPR 2019\n5. 通过自适应知识融合从异构教师中定制学生网络。ICCV 2019\n6. 基于组堆叠双GAN的数据无依赖知识融合。CVPR 2020\n\n## 跨模态 \u002F 知识蒸馏 \u002F 增量学习\n\n1. SoundNet：从无标签视频中学习声音表示——SoundNet架构。Aytar, Yusuf 等人。NeurIPS 2016\n2. 用于监督迁移的跨模态蒸馏。Gupta, Saurabh 等人。CVPR 2016\n3. 在自然场景下利用跨模态迁移进行语音情感识别。Albanie, Samuel 等人。ACM MM 2018\n4. 利用无线电信号进行穿墙人体姿态估计。Zhao, Mingmin 等人。CVPR 2018\n5. 用于视觉问答任务的紧凑型三线性交互。Do, Tuong 等人。ICCV 2019\n6. 用于动作识别的跨模态知识蒸馏。Thoker, Fida Mohammad 和 Gall, Juerge。ICIP 2019\n7. 
学习映射几乎任何事物。Salem, Tawfiq 等人。arXiv:1909.06928\n8. 面向零样本草图检索的语义感知知识保持。Liu, Qing 等人。ICCV 2019\n9. UM-Adapt：使用对抗性跨任务蒸馏的无监督多任务适应。Kundu 等人。ICCV 2019\n10. CrDoCo：基于跨域一致性的像素级域迁移。Chen, Yun-Chun 等人。CVPR 2019\n11. XD：面向多语言句子嵌入的跨语言知识蒸馏。ICLR 2020\n12. 通过软微调实现有效的领域知识迁移。Zhao, Zhichen 等人。arXiv:1909.02236\n13. 只需 ASR：用于唇读的跨模态蒸馏。Afouras 等人。arXiv:1911.12747v1\n14. 用于半监督领域适应的知识蒸馏。arXiv:1908.07355\n15. 通过师生学习进行端到端语音识别的领域适应。Meng, Zhong 等人。arXiv:2001.01798\n16. 使用教师进行聚类对齐的无监督领域适应。ICCV 2019\n17. 用于知识迁移的注意力桥接网络。Li, Kunpeng 等人。ICCV 2019\n18. 基于知识蒸馏的无配对多模态分割。Dou, Qi 等人。arXiv:2001.03111\n19. 多源蒸馏式领域适应。Zhao, Sicheng 等人。arXiv:1911.11554\n20. 从无到有：跨模态哈希的无监督知识蒸馏。Hu, Hengtong 等人。CVPR 2020\n21. 通过自训练改进语义分割。Zhu, Yi 等人。arXiv:2004.14960\n22. 语音到文本适应：迈向高效的跨模态蒸馏。arXiv:2005.08213\n23. 联合渐进式知识蒸馏与无监督领域适应。arXiv:2005.07839\n24. 将知识作为先验：针对缺乏优质知识数据集的跨模态知识泛化。Zhao, Long 等人。CVPR 2020\n25. 基于师生学习的大规模领域适应。Li, Jinyu 等人。arXiv:1708.05466\n26. 利用弱标签数据进行大规模视听声音学习。Fayek, Haytham M. 和 Kumar, Anurag。IJCAI 2020\n27. 通过关系匹配蒸馏跨任务知识。Ye, Han-Jia 等人。CVPR 2020 [[代码]][14.27]\n28. 基于多流网络的动作识别模态蒸馏。Garcia, Nuno C. 等人。ECCV 2018\n29. 通过任务蒸馏进行领域适应。Zhou, Brady 等人。ECCV 2020 [[代码]][14.29]\n30. 用于语义分割的双重超分辨率学习。Wang, Li 等人。CVPR 2020 [[代码]][14.30]\n31. 针对部分领域适应的自适应累积知识迁移。Jing, Taotao 等人。ACM MM 2020\n32. Domain2Vec：用于无监督领域适应的领域嵌入。Peng, Xingchao 等人。ECCV 2020 [[代码]][14.32]\n33. 用于语义分割的无监督领域适应性知识蒸馏。Kothandaraman 等人。arXiv:2011.08007\n34. 面向元学习环境下的对话领域适应的学生—教师架构。Qian, Kun 等人。AAAI 2021\n35. 基于师生网络的多模态融合，用于室内动作识别。Bruce 等人。AAAI 2021\n36. 双教师++：利用可靠的知识迁移，在心脏分割中挖掘域内与域间知识。Li, Kang 等人。TMI 2021\n37. 用于高效多领域无监督适应的知识蒸馏方法。Nguyen 等人。IVC 2021\n38. 特征引导的动作模态迁移。Thoker, Fida Mohammad 和 Snoek, Cees。ICPR 2020\n39. 表象之外还有更多：通过蒸馏多模态知识实现自监督的多目标检测与跟踪。Francisco 等人。CVPR 2021\n40. 用于半监督迁移学习的自适应一致性正则化\nAbulikemu。Abulikemu 等人。CVPR 2021 [[代码]][14.40]\n41. 面向少量样本类别增量学习的语义感知知识蒸馏。Cheraghian 等人。CVPR 2021\n42. 在类别增量学习中蒸馏数据的因果效应。Hu, Xinting 等人。CVPR 2021 [[代码]][14.42]\n43. 基于双层域混合的半监督领域适应，用于语义分割。Chen, Shuaijun 等人。CVPR 2021\n44. PLOP：为持续语义分割而学，永不遗忘。Arthur 等人。CVPR 2021\n45. 通过稀疏且解耦的潜在表征之间的排斥—吸引机制实现持续语义分割。Umberto 和 Pietro。CVPR 2021\n46. 通过跨任务知识迁移指导场景结构，实现单深度超分辨率。Sun, Baoli 等人。CVPR 2021 [[代码]][14.46]\n47. CReST：面向不平衡半监督学习的类重平衡自训练框架。Wei, Chen 等人。CVPR 2021\n48. 领域适应的自适应增强：迈向场景分割中的稳健预测。Zheng, Zhedong 和 Yang, Yi。CVPR 2021\n49. 利用量子图像传感器在黑暗中进行图像分类。Gnanasambandam, Abhiram 和 Chan, Stanley H。ECCV 2020\n50. 利用量子图像传感器进行动态低光成像。Chi, Yiheng 等人。ECCV 2020\n51. 在领域迁移中可视化适应后的知识。Hou, Yunzhong 和 Zheng, Liang。CVPR 2021\n52. 基于中性交叉熵损失的无监督领域适应，用于语义分割。Xu, Hanqing 等人。IEEE TIP 2021\n53. 基于视觉和语言知识蒸馏的零样本检测。Gu, Xiuye 等人。arXiv:2104.13921\n54. 重新思考用于语义分割的无监督领域适应的集成—蒸馏方法。Chao, Chen-Hao 等人。CVPRW 2021\n55. 精神蒸馏：一种结合多领域知识迁移的模型压缩方法。Wu, Zhiyuan 等人。arXiv:2104.14696\n56. 基于傅里叶变换的领域泛化框架。Xu, Qinwei 等人。CVPR 2021\n57. KD3A：通过知识蒸馏实现的无监督多源去中心化领域适应。Feng, Haozhe 等人。ICML 2021\n\n\n## 知识蒸馏的应用\n\n1. 通过从神经元中蒸馏知识来压缩人脸模型。罗平等，AAAI 2016\n2. 利用知识蒸馏学习高效的物体检测模型。陈国斌等，NeurIPS 2017\n3. 学徒：使用知识蒸馏技术提升低精度网络的准确性。米什拉等，NeurIPS 2018\n4. 蒸馏行人重识别：迈向更可扩展的系统。吴安聪等，CVPR 2019\n5. 使用更少帧实现高效的视频分类。巴德瓦杰等，CVPR 2019\n6. 快速人体姿态估计。张峰等，CVPR 2019\n7. 从深度姿态回归网络中蒸馏知识。萨普特拉等，arXiv:1908.00858 (2019)\n8. 通过自注意力蒸馏学习轻量级车道检测CNN。侯元楠等，ICCV 2019\n9. 面向语义分割的结构化知识蒸馏。刘一凡等，CVPR 2019\n10. 用于视频目标检测的关系蒸馏网络。邓嘉俊等，ICCV 2019\n11. 教师指导学生如何从部分标注图像中学习以进行人脸关键点检测。董宣毅和杨毅，ICCV 2019\n12. 用于早期动作预测的渐进式师生学习。王雄辉等，CVPR 2019\n13. 基于信息多蒸馏网络的轻量级图像超分辨率。惠正等，ICCVW 2019\n14. AWSD：用于视频表示的自适应加权时空蒸馏。塔瓦科利安等，ICCV 2019\n15. 动态核蒸馏用于视频中的高效姿态估计。聂学成等，ICCV 2019\n16. 教师引导的架构搜索。巴希万和滕森，ICCV 2019\n17. 用于高效视频推理的在线模型蒸馏。穆拉普迪等，ICCV 2019\n18. 通过细粒度特征模仿蒸馏目标检测器。王涛等，CVPR 2019\n19. 用于视频目标检测的关系蒸馏网络。邓嘉俊等，ICCV 2019\n20. 用于语义分割增量学习的知识蒸馏。arXiv:1911.03462\n21. 
MOD：一种具有在线知识蒸馏的深度混合模型，用于大规模视频时序概念定位。arXiv:1910.12295\n22. 用于暹罗跟踪器的师生知识蒸馏。arXiv:1907.10586\n23. LaTeS：用于师生驾驶策略学习的潜在空间蒸馏。赵阿尔伯特等，CVPR 2020（预）\n24. 用于脑肿瘤分割的知识蒸馏。arXiv:2002.03688\n25. ROAD：面向现实的城市场景语义分割适应方法。陈宇华等，CVPR 2018\n26. 用于音频分类的多表示知识蒸馏。高亮等，arXiv:2002.09607\n27. 用于超分辨率通用风格迁移的协同蒸馏。王欢等，CVPR 2020 [[代码]][15.28]\n28. ShadowTutor：用于移动端视频DNN推理的分布式部分蒸馏。郑在源等，ICPP 2020 [[代码]][15.29]\n29. 带有教师推荐学习的目标关系图用于视频字幕生成。张子琪等，CVPR 2020\n30. 带有知识蒸馏的时空图用于视频字幕生成。CVPR 2020 [[代码]][15.31]\n31. 利用知识蒸馏实现压缩版深度6DoF目标检测。费利克斯等，arXiv:2003.13586\n32. 通过蒸馏语义实现从视频中全面理解场景。托西等，arXiv:2003.14030\n33. 并行WaveNet：快速高保真语音合成。范等，ICML 2018\n34. 从NRSfM中蒸馏知识以进行弱监督3D姿态学习。王朝阳等，ICCV 2019\n35. KD-MRI：一种用于MRI工作流中图像重建与修复的知识蒸馏框架。穆鲁格桑等，MIDL 2020\n36. 面向室内语义分割的几何感知蒸馏。焦建波等，CVPR 2019\n37. 教师指导学生如何从部分标注图像中学习以进行人脸关键点检测。ICCV 2019\n38. 通过异构任务模仿蒸馏图像去雾。洪明等，CVPR 2020\n39. 通过标签平滑进行动作预判的知识蒸馏。坎波雷塞等，arXiv:2004.07711\n40. 通过蒸馏图像-文本匹配模型实现更贴近实际的图像字幕生成。周远恩等，CVPR 2020\n41. 在多个实例检测网络中通过精炼过程蒸馏知识。泽尼和荣克，arXiv:2004.10943\n42. 实现边缘端目标检测的增量知识迁移。arXiv:2004.05746\n43. 无先验知识的学生：基于判别式潜在嵌入的师生异常检测。贝格曼等，CVPR 2020\n44. TA-学生VQA：通过自我提问进行多智能体训练。熊培熙和吴颖，CVPR 2020\n45. Mentornet：在标签损坏的情况下为超深神经网络学习数据驱动的课程。蒋璐等，ICML 2018\n46. 用于半监督阴影检测的多任务平均教师。陈志浩等，CVPR 2020 [[代码]][15.48]\n47. 通过知识蒸馏学习轻量级人脸检测器。张世峰等，IEEE 2019\n48. 通过层次化知识蒸馏学习轻量级行人检测器。ICIP 2019\n49. 通过任务自适应正则化蒸馏目标检测器。孙若雨等，arXiv:2006.13108\n50. 面向语义分割的类内紧凑性蒸馏。ECCV 2020\n51. DOPE：针对野外全身3D姿态估计的局部专家蒸馏。ECCV 2020\n52. 自相似学生用于部分标注病理切片图像的分割。ECCV 2020\n53. 多视角知识蒸馏实现稳健的重识别。波雷洛等，ECCV 2020 [[代码]][15.58]\n54. LabelEnc：一种用于目标检测的新中间监督方法。郝苗等，arXiv:2007.03282\n55. 光流蒸馏：迈向高效稳定的视频风格迁移。陈兴浩等，ECCV 2020\n56. 用于半监督3D动作识别的对抗性自监督学习。施晨阳等，ECCV 2020\n57. 双路径蒸馏：一种统一框架，用于改进黑盒攻击。张永刚等，ICML 2020\n58. 基于师生GAN模式的RGB-IR跨模态人员重识别。张子悦等，arXiv:2007.07452\n59. 通过深度蒸馏进行散焦模糊检测。存晓东和潘志文，ECCV 2020 [[代码]][15.64]\n60. 通过渐进式知识迁移提升弱监督目标检测。钟元义等，ECCV 2020 [[代码]][15.64]\n61. 权重衰减调度与知识蒸馏用于主动学习。ECCV 2020\n62. 通过知识蒸馏规避AutoAugment的异常值。ECCV 2020\n63. 通过分布蒸馏损失改善对困难样本的人脸识别。ECCV 2020\n64. 排他性-一致性正则化的知识蒸馏用于人脸识别。ECCV 2020\n65. 自相似学生用于部分标注病理切片图像的分割。程贤祖等，ECCV 2020\n66. 面向重叠宫颈细胞实例分割的深度半监督知识蒸馏。周燕宁等，arXiv:2007.10787 [[代码]][15.70]\n67. 基于两级残差蒸馏的三重网络用于增量目标检测。杨东宝等，arXiv:2007.13428\n68. 通过回归-检测双知识迁移迈向无监督人群计数。刘玉婷等，ACM MM 2020\n69. 面向图像字幕生成的教师关键训练策略。黄一清和陈建生，arXiv:2009.14405\n70. 带有教师推荐学习的目标关系图用于视频字幕生成。张子琪等，CVPR 2020\n71. 从多帧到单帧：面向3D目标检测的知识蒸馏。王岳等，ECCV 2020\n72. 用于轻量级图像超分辨率的残差特征蒸馏网络。刘洁等，ECCV 2020\n73. 保留句间相似性的知识蒸馏用于音频标签。Interspeech 2020\n74. 带有无噪声差分隐私的联邦模型蒸馏。arXiv:2009.05537\n75. 通过路由多样化的分布感知专家实现长尾识别。王旭东等，arXiv:2010.01809\n76. 通过时空知识蒸馏实现快速视频显著目标检测。易唐和袁力，arXiv:2010.10027\n77. 用于异常检测的多分辨率知识蒸馏。萨莱希等，CVPR 2021\n78. 面向语义分割的通道级蒸馏。舒昌勇等，arXiv:2011.13256\n79. 教我用混合监督进行分割：自信的学生终成大师。多尔兹等，arXiv:2012.08051\n80. 不变教师与等变学生用于无监督3D人体姿态估计。许晨欣等，AAAI 2021 [[代码]][15.80]\n81. 训练数据高效的图像变换器及通过注意力进行蒸馏。图弗龙等，arXiv:2012.12877 [[代码]][15.81]\n82. SID：通过选择性和相互关联的蒸馏实现无锚框目标检测的增量学习。彭灿等，arXiv:2012.15439\n83. PSSM-Distil：利用对比学习进行知识蒸馏，在低质量PSSM上预测蛋白质二级结构。王秦等，AAAI 2021\n84. 用于端到端人员搜索的多样化知识蒸馏。张鑫宇等，AAAI 2021\n85. 通过多模态到单模态的师生互学提升音频标签。尹怡芳等，AAAI 2021\n86. 神经注意力蒸馏：清除深度神经网络中的后门触发器。李一戈等，ICLR 2021 [[代码]][15.86]\n87. 用于半监督目标检测的无偏教师。刘延成等，ICLR 2021 [[代码]][15.87]\n88. 面向目标检测的定位蒸馏。郑兆辉等，CVPR 2021 [[代码]][15.88]\n89. 通过中间分类头蒸馏知识。阿里安和阿米拉利，arXiv:2103.00497\n90. 通过解耦特征蒸馏目标检测器。（华为-诺亚）。CVPR 2021\n91. 面向目标检测的一般实例蒸馏。戴星等，CVPR 2021\n92. 用于异常检测的多分辨率知识蒸馏。穆罕默德雷扎等，CVPR 2021\n93. 师生特征金字塔匹配用于无监督异常检测。王国栋等，arXiv:2103.04257\n94. 教师-探索者-学生学习：一种用于开放集识别的新学习方法。张在渊和金昌旭。IEEE 2021\n95. 密集关系蒸馏结合上下文感知聚合，用于少样本目标检测。胡汉哲等，CVPR 2021 [[代码]][15.95]\n96. 通过知识蒸馏压缩视觉-语言模型。方志远等，arXiv:2104.02096\n97. 再见互信息：用于跨模态人员重识别的变分蒸馏。田旭东等，CVPR 2021\n98. 通过对比知识蒸馏提升弱监督视觉接地能力。王立伟等，CVPR 2021\n99. 
有序的双教师知识蒸馏用于轻量级人体姿态估计。赵仲秋等，arXiv:2104.10414\n100. 通过知识蒸馏提升轻量级深度估计。胡俊杰等，arXiv:2105.06143\n101. 弱监督密集视频字幕生成，联合运用知识蒸馏和跨模态匹配。吴博锋等，arXiv:2105.08252\n102. 重新审视目标检测中的知识蒸馏。巴尼塔莱比-德霍尔迪，arXiv:2105.10633\n103. 通过对比自蒸馏迈向紧凑的单幅图像超分辨率。王彦博等，IJCAI 2021\n104. 多少观测足够？用于轨迹预测的知识蒸馏。蒙蒂等，CVPR 2022\n\n### 用于自然语言处理与数据挖掘\n\n1. 针对BERT模型压缩的患者知识蒸馏。孙思琪等。arXiv:1908.09355\n2. TinyBERT：用于自然语言理解的BERT知识蒸馏模型。焦晓琪等。arXiv:1909.10351\n3. 基于知识蒸馏的视觉问答任务专精学习。NeurIPS 2018\n4. 用于双语词典构建的知识蒸馏。EMNLP 2017\n5. 一种面向可维护对话管理器的师生框架。EMNLP 2018\n6. 非自回归机器翻译中的知识蒸馏机制研究。arxiv 2019\n7. DistilBERT：BERT的精简版，更小、更快、更便宜、更轻量。Sanh, Victor等。arXiv:1910.01108\n8. 见多识广的学生学得更好：关于预训练紧凑模型的重要性。Turc, Iulia等。arXiv:1908.08962\n9. 复杂网络到响应预测的知识蒸馏研究。Arora, Siddhartha等。NAACL 2019\n10. 用于文本生成的BERT知识蒸馏模型。arXiv:1911.03829v1\n11. 非自回归机器翻译中的知识蒸馏机制理解。arXiv:1911.02727\n12. MobileBERT：适用于资源受限设备的紧凑型任务无关BERT模型。孙志清等。ACL 2020\n13. 从预训练模型中获取知识应用于神经机器翻译。Weng, Rongxiang等。AAAI 2020\n14. TwinBERT：通过知识蒸馏构建孪生结构BERT模型以实现高效检索。Lu, Wenhao等。KDD 2020\n15. 通过自集成和自蒸馏改进BERT微调。Xu, Yige等。arXiv:2002.10345\n16. FastBERT：具有自蒸馏功能且推理时间可适应的BERT模型。Liu, Weijie等。ACL 2020\n17. LadaBERT：通过混合模型压缩实现BERT的轻量化适配。Mao, Yihuan等。arXiv:2004.04124\n18. DynaBERT：宽度和深度可动态调整的BERT模型。Hou, Lu等。NeurIPS 2020\n19. 面向多语言序列标注的结构级知识蒸馏。Wang, Xinyu等。ACL 2020\n20. 蒸馏嵌入：利用知识蒸馏进行非线性嵌入分解。Lioutas, Vasileios等。arXiv:1910.06720\n21. TinyMBERT：用于大规模多语言命名实体识别的多阶段知识蒸馏框架。Mukherjee & Awadallah。ACL 2020\n22. 用于多语言无监督神经机器翻译的知识蒸馏。Sun, Haipeng等。arXiv:2004.10171\n23. 利用知识蒸馏将单语句子嵌入扩展为多语种。Reimers, Nils & Gurevych, Iryna。arXiv:2004.09813\n24. 为快速检索类聊天机器人蒸馏知识。Tahami等。arXiv:2004.11045\n25. 基于目标语言未标注数据的师生学习实现单\u002F多源跨语言命名实体识别。ACL 2020\n26. 使用均值教师进行半监督学习的局部聚类。arXiv:2004.09665\n27. 基于判别式教师的时间扭曲技术增强神经网络时序数据。arXiv:2004.08780\n28. 双向编码器的句法结构蒸馏预训练。arXiv:2005.13482\n29. 蒸馏、适配、再蒸馏：针对神经机器翻译的小规模领域内模型训练。arXiv:2003.02877\n30. 为更快速、更绿色的依存句法分析蒸馏神经网络。arXiv:2006.00844\n31. 基于信息丰富的软标签蒸馏知识用于神经关系抽取。AAAI 2020 [[代码]][16.32]\n32. 通过蒸馏图像-文本匹配模型实现更贴近实际的图像字幕生成。Zhou, Yuanen等。CVPR 2020\n33. 利用知识蒸馏在模态不完整的情况下进行多模态学习。Wang, Qi等。KDD 2020\n34. 将BERT知识蒸馏应用于序列到序列的自动语音识别。Futami, Hayato等。arXiv:2008.03822\n35. 针对语言模型压缩的中间表示对比蒸馏。Sun, Siqi等。EMNLP 2020 [[代码]][16.37]\n36. 用于文本摘要的噪声自知识蒸馏。arXiv:2009.07032\n37. 简化版TinyBERT：用于文档检索的知识蒸馏模型。arXiv:2009.07531\n38. 通过模仿学习实现自回归知识蒸馏。arXiv:2009.07253\n39. BERT-EMD：基于地球移动距离的多对多层映射用于BERT压缩。EMNLP 2020 [[代码]][16.392]\n40. 可解释嵌入过程中的知识迁移。Seunghyun Lee等。AAAI 2021 [[代码]][16.40]\n41. LRC-BERT：用于自然语言理解的潜在表征对比知识蒸馏模型。Fu, Hao等。AAAI 2021\n42. 向零样本知识蒸馏迈进：用于自然语言处理。Ahmad等。arXiv:2012.15495\n43. Meta-KD：跨领域的语言模型压缩元知识蒸馏框架。Pan, Haojie等。AAAI 2021\n44. 学习如何扩充数据以进行数据稀缺领域的BERT知识蒸馏。Feng, Lingyun等。AAAI 2021\n45. 通过标签混淆学习提升文本分类模型性能。Guo, Biyang等。AAAI 2021\n46. NewsBERT：为智能新闻应用蒸馏预训练语言模型。Wu, Chuhan等。kdd 2021\n\n### 针对推荐系统\n\n1. 基于策略蒸馏强化学习的长期奖励多任务推荐。Liu, Xi 等。arXiv:2001.09595\n2. 一种基于统一数据的反事实推荐通用知识蒸馏框架。Liu, Dugang 等。SIGIR 2020 [[幻灯片]][16.35] [[代码]][16.352]\n3. LightRec：一种内存与搜索效率俱佳的推荐系统。Lian, Defu 等。WWW 2020\n4. 淘宝推荐中的特权特征蒸馏。Xu, Chen 等。KDD 2020\n5. 资源受限移动设备上的下一个兴趣点推荐。WWW 2020\n6. 利用外部知识实现高效推荐的对抗性蒸馏。Chen, Xu 等。ACM Trans, 2018\n7. 排序蒸馏：为推荐系统学习高性能的紧凑排序模型。Tang, Jiaxi 等。SIGKDD 2018\n8. 一种新颖的增强型协同自编码器，结合知识蒸馏用于 Top-N 推荐系统。Pan, Yiteng 等。Neurocomputing 2019 [[代码]][16.38]\n9. ADER：面向会话式推荐持续学习的自适应蒸馏示例回放。Mi, Fei 等。ACM RecSys 2020\n10. 基于知识蒸馏的 CTR 预测集成。Zhu, Jieming 等（华为）。CIKM 2020\n11. DE-RRD：推荐系统的知识蒸馏框架。Kang, Seongku 等。CIKM 2020 [[代码]][16.39]\n12. 带有注意力机制的知识蒸馏神经兼容性建模。Song, Xuemeng 等。SIGIR 2018\n13. 结合图卷积网络蒸馏的二值化协同过滤。Wang, Haoyu 等。IJCAI 2019\n14. 用于 Top-N 推荐的协同蒸馏。Jae-woong Lee 等。CIKM 2019\n15. 将结构化知识蒸馏到嵌入中，以实现可解释且准确的推荐。Zhang Yuan 等。WSDM 2020\n16. UMEC：统一模型与嵌入压缩，用于高效推荐系统。ICLR 2021\n17. 
用于 Top-K 推荐系统的双向蒸馏。WWW 2021\n18. 冷启动推荐中的特权图蒸馏。SIGIR 2021\n19. 推荐系统的拓扑蒸馏 [KDD 2021]\n20. 用于推荐中知识图谱蒸馏的条件注意力网络 [CIKM 2021]\n21. 探索、过滤与蒸馏：推荐中的蒸馏强化学习 [CIKM 2021] [[视频]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3459637.3481917) [[代码]](https:\u002F\u002Fgithub.com\u002Fmodriczhang\u002FDRL-Rec)\n22. 图结构感知的对比式知识蒸馏，用于推荐系统的增量学习 [CIKM 2021]\n23. 用于推荐中知识图谱蒸馏与精炼的条件图注意力网络 [CIKM 2021]\n24. 多兴趣推荐中的目标兴趣蒸馏 [CIKM 2022] [[视频]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3511808.3557464) [[代码]](https:\u002F\u002Fgithub.com\u002FTHUwangcy\u002FReChorus\u002Ftree\u002FCIKM22)\n25. KDCRec：基于均匀数据的反事实推荐知识蒸馏 [TKDE 2022] [[代码]](https:\u002F\u002Fgithub.com\u002Fdgliu\u002FTKDE_KDCRec)\n26. 重访基于图的社会推荐：一种蒸馏增强的社会图网络 [WWW 2022] [[代码]](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002Fuqmsr67wqurpnre\u002FSupplementary%20Material.zip?dl=0)\n27. 用于个性化穿搭推荐的假负样本蒸馏与对比学习 [Arxiv 2110.06483]\n28. 用于 Top-N 推荐系统排序蒸馏的双重修正策略 [ArXiv 2109.03459v1]\n29. 基于可微架构搜索的场景自适应知识蒸馏，用于序列推荐。Chen, Lei 等。[ArXiv 2107.07173v1]\n30. 插值蒸馏用于统一有偏与无偏推荐 [SIGIR 2022] [[视频]](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3477495.3532002) [[代码]](https:\u002F\u002Fgithub.com\u002FDingseewhole\u002FInterD_master)\n31. FedSPLIT：基于非负联合矩阵分解与知识蒸馏的一次性联邦推荐系统 [Arxiv 2205.02359v1]\n32. 基于自监督知识蒸馏的端侧下一件商品推荐 [SIGIR 2022] [[代码]](https:\u002F\u002Fgithub.com\u002Fxiaxin1998\u002FOD-Rec)\n33. 多任务推荐中的跨任务知识蒸馏 [AAAI 2022]\n34. 朝着理解排序学习中的特权特征蒸馏迈进 [NIPS 2022]\n35. 打破黑箱：基于知识蒸馏的公平排序框架 [WISE 2022]\n36. Distill-VQ：通过从密集嵌入中蒸馏知识来学习检索导向的向量量化 [SIGIR 2022] [[代码]](https:\u002F\u002Fgithub.com\u002Fstaoxiao\u002Flibvq)\n37. AutoFAS：预排序系统的自动特征与架构选择 [KDD 2022]\n38. 用于大规模 CTR 预测的增量学习框架 [RecSys 22]\n39. 基于知识蒸馏的有向无环图因子机，用于 CTR 预测 [WSDM 2023] [[代码]](https:\u002F\u002Fgithub.com\u002Frucaibox\u002Fdagfm)\n40. 用于推荐的无偏知识蒸馏 [WSDM 2023] [[代码]](https:\u002F\u002Fgithub.com\u002Fchengang95\u002FUnKD)\n41. DistilledCTR：通过模型蒸馏实现的精准且可扩展的 CTR 预测模型 [ESWA 2022]\n43. 基于深度强化学习的顶部感知推荐蒸馏 [Information Sciences 2021]\n\n## 模型剪枝或量化\n\n1. 利用主导卷积核和知识预回归加速卷积神经网络。ECCV 2016\n2. N2N学习：通过策略梯度强化学习实现网络到网络的压缩。Ashok、Anubhav等。ICLR 2018\n3. 可裁剪神经网络。Yu、Jiahui等。ICLR 2018\n4. 用于无配对图像翻译的协同进化压缩。Shu、Han等。ICCV 2019\n5. 元剪枝：基于元学习的自动神经网络通道剪枝。Liu、Zechun等。ICCV 2019\n6. LightPAFF：一种用于预训练和微调的两阶段蒸馏框架。ICLR 2020\n7. 带提示的剪枝：一种高效的模型加速框架。ICLR 2020\n8. 使用廉价卷积和在线蒸馏训练卷积神经网络。arXiv:1909.13063\n9. 跨领域深度神经网络压缩中的协作式剪枝。[Chen, Shangyu][17.9]等。IJCAI 2019\n10. QKD：感知量化的知识蒸馏。Kim、Jangho等。arXiv:1911.12491v1\n11. 基于残差连接和有限数据的神经网络剪枝。Luo、Jian-Hao & Wu、Jianxin。CVPR 2020\n12. 使用全精度辅助模块训练量化神经网络。Zhuang、Bohan等。CVPR 2020\n13. 向有效的低比特卷积神经网络迈进。Zhuang、Bohan等。CVPR 2018\n14. 使用低比特权重和激活的有效卷积神经网络训练。Zhuang、Bohan等。arXiv:1908.04680\n15. 更加关注迭代剪枝的快照：通过集成蒸馏改进模型压缩。Le等。arXiv:2006.11487 [[代码]][17.15]\n16. 知识蒸馏超越模型压缩。Choi、Arthur等。arxiv:2007.01493\n17. 针对二值卷积神经网络的蒸馏引导残差学习。Ye、Jianming等。ECCV 2020\n18. 使用层次自蒸馏的级联通道剪枝。Miles & Mikolajczyk。BMVC 2020\n19. 三值BERT：感知蒸馏的超低比特BERT。Zhang、Wei等。EMNLP 2020\n20. 权重蒸馏：在神经网络参数中传递知识。arXiv:2009.09152\n21. 随机精度集成：量化深度神经网络的自我知识蒸馏。Boo、Yoonho等。AAAI 2021\n22. 二值图神经网络。Bahri、Mehdi等。CVPR 2021\n23. 自我损害对比学习。Jiang、Ziyu等。ICML 2021\n24. 信息论表示蒸馏。Miles等。BMVC 2022 [[代码]][19.10]\n25. 针对二值卷积神经网络的蒸馏引导残差学习。Ye、Jianming等。ECCV 2020\n26. 使用层次自蒸馏的级联通道剪枝。Miles & Mikolajczyk。BMVC 2020\n27. 三值BERT：感知蒸馏的超低比特BERT。Zhang、Wei等。EMNLP 2020\n28. 权重蒸馏：在神经网络参数中传递知识。arXiv:2009.09152\n29. 随机精度集成：量化深度神经网络的自我知识蒸馏。Boo、Yoonho等。AAAI 2021\n30. 二值图神经网络。Bahri、Mehdi等。CVPR 2021\n31. 自我损害对比学习。Jiang、Ziyu等。ICML 2021\n\n## 超越\n\n1. 深度网络真的需要那么深吗？Ba、Jimmy，以及Rich Caruana。NeurIPS 2014\n2. 标签平滑何时会有帮助？Müller、Rafael，Kornblith，以及Hinton。NeurIPS 2019\n3. 
向理解知识蒸馏迈进。Phuong、Mary，以及Lampert、Christoph。ICML 2019\n4. 用逻辑规则驾驭深度神经网络。ACL 2016\n5. 标签的适应性正则化。Ding、Qianggang等。arXiv:1908.05474\n6. 神经网络之间的知识同构。Liang、Ruofan等。arXiv:1908.01581\n7. （综述）深度神经网络中用于知识蒸馏的师生技术建模。arXiv:1912.13179\n8. 理解并改进知识蒸馏。Tang、Jiaxi等。arXiv:2002.03532\n9. 分类任务中知识蒸馏的现状。Ruffy、Fabian，以及Chahal、Karanbir。arXiv:1912.10850 [[代码]][18.11]\n10. 通过量化知识来解释知识蒸馏。[Zhang、Quanshi][18.13]等。CVPR 2020\n11. DeepVID：通过知识蒸馏实现图像分类器的深度视觉解释与诊断。IEEE Trans，2019年。\n12. 论知识蒸馏的不合理有效性：核区域分析。Rahbar、Arman等。arXiv:2003.13438\n13. （综述）知识蒸馏与师生学习在视觉智能中的应用：回顾与新展望。Wang、Lin & Yoon、Kuk-Jin。arXiv:2004.05937\n14. 为什么蒸馏有帮助：统计学视角。arXiv:2005.10419\n15. 通过知识蒸馏转移归纳偏置。Abnar、Samira等。arXiv:2006.00555\n16. 标签平滑能否缓解标签噪声？Lukasik、Michal等。ICML 2020\n17. 数据增强对知识蒸馏影响的实证分析。Das、Deepan等。arXiv:2006.03810\n18. （综述）知识蒸馏：一项综述。Gou、Jianping等。IJCV 2021\n19. 对抗性迁移是否意味着知识迁移？Liang、Kaizhao等。arXiv:2006.14512\n20. 关于知识蒸馏的揭秘：残差网络视角。Jha等。arXiv:2006.16589\n21. 利用简单模型已有的知识来提升其性能。Dhurandhar等。ICML 2020\n22. 用于神经逻辑规则学习的特征提取函数。Gupta & Robles-Kelly。arXiv:2008.06326\n23. 从集成视角看知识蒸馏与其他技术的正交性。SeongUk等。arXiv:2009.04120\n24. 宽度神经网络中的知识蒸馏：风险边界、数据效率与不完美的教师。Ji、Guangda & Zhu、Zhanxing。NeurIPS 2020\n25. 为知识蒸馏中的特征模仿辩护。Wang、Guo-Hua等。arXiv:2011.0142\n26. 通过知识蒸馏继承正则化的可解模型。Luca Saglietti & Lenka Zdeborova。arXiv:2012.00194\n27. 不可蒸馏：制造一个无法教导学生的恶劣教师。ICLR 2021\n28. 向理解深度学习中的集成、知识蒸馏和自我蒸馏迈进。Allen-Zhu、Zeyuan & Li、Yuanzhi。（微软）arXiv:2012.09816\n29. 从清洁输入到噪声输入的师生学习。Hong、Guanzhe等。CVPR 2021\n30. 标签平滑是否真的与知识蒸馏不兼容：一项实证研究。ICLR 2021 [[项目]][18.32]\n31. 用于收益优化的模型蒸馏：可解释的个性化定价。Biggs、Max等。ICML 2021\n32. 蒸馏的统计学视角。Aditya等（谷歌）。ICML 2021\n33. （综述）无数据知识转移：一项综述。Liu、Yuang等。arXiv:2112.15278\n34. 知识蒸馏超越模型压缩。Choi、Sarfraz等。arxiv:2007.01493\n\n## 蒸馏工具\n\n1. [Neural Network Distiller][18.8]：用于深度神经网络压缩研究的 Python 包。arXiv:1910.12232\n2. [TextBrewer][18.12]：面向自然语言处理的开源知识蒸馏工具包。哈尔滨工业大学与科大讯飞。arXiv:2002.12620\n3. [torchdistill][18.28]：一个模块化、基于配置驱动的知识蒸馏框架。\n4. [KD-Lib][18.29]：一个用于知识蒸馏、剪枝和量化操作的 PyTorch 库。Shen, Het 等人。arXiv:2011.14691\n5. [Knowledge-Distillation-Zoo][18.30]\n6. [RepDistiller][18.31]\n7. 
[classification distiller][18.11]\n\n---\n注：所有论文的 PDF 文件均可在 [arXiv](https:\u002F\u002Farxiv.org\u002Fsearch\u002F)、[Bing](https:\u002F\u002Fwww.bing.com) 或 [Google](https:\u002F\u002Fwww.google.com) 上找到并下载。\n\n来源：\u003Chttps:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation>\n\n感谢所有贡献者：\n\n[![yuang](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_87ff5905ebb5.png)](https:\u002F\u002Fgithub.com\u002FFLHonker)  [![lioutasb](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_a4785b56a6ab.png)](https:\u002F\u002Fgithub.com\u002Flioutasb)  [![KaiyuYue](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_cecdd7dd10b6.png)](https:\u002F\u002Fgithub.com\u002FKaiyuYue)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_5cfdea6fc64f.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fshivmgg)  [![cardwing](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_dd5a3e48487b.png)](https:\u002F\u002Fgithub.com\u002Fcardwing)  [![jaywonchung](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_9118b67499c7.png)](https:\u002F\u002Fgithub.com\u002Fjaywonchung)  [![ZainZhao](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_ef7215262927.png)](https:\u002F\u002Fgithub.com\u002FZainZhao)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_10e51735c5a1.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fforjiuzhou) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_2a9add4561aa.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Ffmthoker) [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_74239c291260.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002Fcardwing)  [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_readme_12c5737c49a2.png\" width = \"28\" height = \"28\" alt=\"avatar\" \u002F>](https:\u002F\u002Fgithub.com\u002FPyJulie)  \n\n\n联系方式：刘源（frankliu624![](https:\u002F\u002Fres.cloudinary.com\u002Fflhonker\u002Fimage\u002Fupload\u002Fv1605363963\u002Ffrankio\u002Fat1.png)outlook.com）\n\n[1.10]: https:\u002F\u002Fgithub.com\u002Fyuanli2333\u002FTeacher-free-Knowledge-Distillation\n[1.26]: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fmeta_pseudo_labels\n[1.30]: https:\u002F\u002Fgithub.com\u002Fdwang181\u002Factive-mixup\n[1.36]: https:\u002F\u002Fgithub.com\u002Fbigaidream-projects\u002Frole-kd\n[1.41]: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Fieg\n[1.42]: https:\u002F\u002Fgithub.com\u002Fxuguodong03\u002FSSKD\n[1.43]: https:\u002F\u002Fgithub.com\u002Fbrjathu\u002FSKD\n[1.46]: https:\u002F\u002Fgithub.com\u002FDSLLcode\u002FDSLL\n[1.59]: https:\u002F\u002Fgithub.com\u002Fxuguodong03\u002FUNIXKD\n[1.62]: https:\u002F\u002Fgithub.com\u002Fjhoon-oh\u002Fkd_data\n[1.65]: 
https:\u002F\u002Fgithub.com\u002Fwinycg\u002FHSAKD\n[2.15]: https:\u002F\u002Fgithub.com\u002Fclovaai\u002Foverhaul-distillation\n[2.27]: https:\u002F\u002Fgithub.com\u002FJetRunner\u002FBERT-of-Theseus\n[2.30]: https:\u002F\u002Fgithub.com\u002Fzhouzaida\u002Fchannel-distillation\n[2.31]: https:\u002F\u002Fgithub.com\u002FKaiyuYue\u002Fmgd\n[2.34]: https:\u002F\u002Fgithub.com\u002Faztc\u002FFNKD\n[2.44]: https:\u002F\u002Fgithub.com\u002FDefangChen\u002FSemCKD\n[2.45]: https:\u002F\u002Fgithub.com\u002FBUPT-GAMMA\u002FCPF\n[2.46]: https:\u002F\u002Fgithub.com\u002Fihollywhy\u002FDistillGCN.PyTorch\n[2.47]: https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fattention-feature-distillation\n[4.4]: https:\u002F\u002Fgithub.com\u002FHobbitLong\u002FRepDistiller\n[4.9]: https:\u002F\u002Fgithub.com\u002Ftaoyang1122\u002FMutualNet\n[4.15]: https:\u002F\u002Fgithub.com\u002Fwinycg\u002FMCL\n[5.11]: https:\u002F\u002Fgithub.com\u002Falinlab\u002Fcs-kd\n[5.13]: https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FSelf-PU\n[5.19]: https:\u002F\u002Fgithub.com\u002FMingiJi\u002FFRSKD\n[5.20]: https:\u002F\u002Fgithub.com\u002FVegeta2020\u002FSE-SSD\n[6.6]: https:\u002F\u002Fgithub.com\u002Fcardwing\u002FCodes-for-IntRA-KD\n[6.7]: https:\u002F\u002Fgithub.com\u002Fpassalis\u002Fpkth\n[8.20]: https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fgan-compression\n[8.23]: https:\u002F\u002Fgithub.com\u002FEvgenyKashin\u002Fstylegan2-distillation\n[8.25]: https:\u002F\u002Fgithub.com\u002Fterarachang\u002FACCV_TinyGAN\n[8.26]: https:\u002F\u002Fgithub.com\u002FSJLeo\u002FDMAD\n[8.28]: https:\u002F\u002Fgithub.com\u002Fmshahbazi72\u002FcGANTransfer\n[8.29]: https:\u002F\u002Fgithub.com\u002Fsnap-research\u002FCAT\n[10.6]: https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDeepInversion\n[10.9]: https:\u002F\u002Fgithub.com\u002Fsnudatalab\u002FKegNet\n[10.12]: https:\u002F\u002Fgithub.com\u002Fxushoukai\u002FGDFQ\n[10.16]: https:\u002F\u002Fgithub.com\u002Fleaderj1001\u002FBillion-scale-semi-supervised-learning\n[10.28]: https:\u002F\u002Fgithub.com\u002FFLHonker\u002FZAQ-code\n[10.35]: https:\u002F\u002Fgithub.com\u002Famirgholami\u002FZeroQ\n[10.30]: https:\u002F\u002Fgithub.com\u002Fcake-lab\u002Fdatafree-model-extraction\n[10.36]: https:\u002F\u002Fgithub.com\u002Fzju-vipa\u002FDataFree\n[11.8]: https:\u002F\u002Fgithub.com\u002FTAMU-VITA\u002FAGD\n[12.2]: https:\u002F\u002Fgithub.com\u002Fhankook\u002FSLA\n[12.5]: https:\u002F\u002Fgithub.com\u002Fsthalles\u002FPyTorch-BYOL\n[12.6]: https:\u002F\u002Fgithub.com\u002FXHWXD\u002FDBSN\n[12.9]: https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FAdversarial-Contrastive-Learning\n[12.10]: https:\u002F\u002Fgithub.com\u002FUMBCvision\u002FCompRess\n[12.11]: https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fsimclr\n[12.12]: https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Ftree\u002Fmaster\u002Fmodels\u002Fofficial\u002Fdetection\u002Fprojects\u002Fself_training\n[12.13]: https:\u002F\u002Fgithub.com\u002FUMBCvision\u002FISD\n[12.25]: http:\u002F\u002Fgithub.com\u002FSYangDong\u002Ftse-t\n[12.29]: https:\u002F\u002Fweihonglee.github.io\u002FProjects\u002FKD-MTL\u002FKD-MTL.htm\n[12.30]: https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAMTML-KD-code\n[12.37]: https:\u002F\u002Fgithub.com\u002FAnTuo1998\u002FAE-KD\n[12.41]: https:\u002F\u002Fgithub.com\u002Fahmdtaha\u002Fknowledge_evolution\n[13.24]: https:\u002F\u002Fgithub.com\u002Fzju-vipa\u002FKamalEngine\n[14.27]: https:\u002F\u002Fgithub.com\u002Fnjulus\u002FReFilled\n[14.29]: 
https:\u002F\u002Fgithub.com\u002Fbradyz\u002Ftask-distillation\n[14.30]: https:\u002F\u002Fgithub.com\u002Fwanglixilinx\u002FDSRL\n[14.32]: https:\u002F\u002Fgithub.com\u002FVisionLearningGroup\u002FDomain2Vec\n[14.40]: https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FSemi-Supervised-Transfer-Learning\n[14.42]: https:\u002F\u002Fgithub.com\u002FJoyHuYY1412\u002FDDE_CIL\n[14.46]:https:\u002F\u002Fgithub.com\u002FSunbaoli\u002Fdsr-distillation\n[15.5]: https:\u002F\u002Fgithub.com\u002Flucidrains\u002Fbyol-pytorch\n[15.28]: https:\u002F\u002Fgithub.com\u002Fmingsun-tse\u002Fcollaborative-distillation\n[15.29]: https:\u002F\u002Fgithub.com\u002Fjaywonchung\u002FShadowTutor\n[15.31]: https:\u002F\u002Fgithub.com\u002FStanfordVL\u002FSTGraph\n[15.48]: https:\u002F\u002Fgithub.com\u002FeraserNut\u002FMTMT\n[15.58]: https:\u002F\u002Fgithub.com\u002Faimagelab\u002FVKD\n[15.64]: https:\u002F\u002Fgithub.com\u002Fvinthony\u002Fdepth-distillation\n[15.64]: https:\u002F\u002Fgithub.com\u002Fmikuhatsune\u002Fwsod_transfer\n[15.70]: https:\u002F\u002Fgithub.com\u002FSIAAAAAA\u002FMMT-PSM\n[15.80]: https:\u002F\u002Fgithub.com\u002Fsjtuxcx\u002FITES\n[15.81]: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit\n[15.86]: https:\u002F\u002Fgithub.com\u002Fbboylyg\u002FNAD\n[15.87]: https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Funbiased-teacher\n[15.88]: https:\u002F\u002Fgithub.com\u002FHikariTJU\u002FLD\n[15.95]: https:\u002F\u002Fgithub.com\u002Fhzhupku\u002FDCNet\n[16.32]: https:\u002F\u002Fgithub.com\u002Fzzysay\u002FKD4NRE\n[16.35]: http:\u002F\u002Fcsse.szu.edu.cn\u002Fstaff\u002Fpanwk\u002Fpublications\u002FConference-SIGIR-20-KDCRec-Slides.pdf\n[16.352]:https:\u002F\u002Fgithub.com\u002Fdgliu\u002FSIGIR20_KDCRec\n[16.37]: https:\u002F\u002Fgithub.com\u002Fintersun\u002FCoDIR\n[16.38]: https:\u002F\u002Fgithub.com\u002Fgraytowne\u002Frank_distill\n[16.39]: https:\u002F\u002Fgithub.com\u002FSeongKu-Kang\u002FDE-RRD_CIKM20\n[16.392]:https:\u002F\u002Fgithub.com\u002Flxk00\u002FBERT-EMD\n[16.40]: https:\u002F\u002Fgithub.com\u002Fsseung0703\u002FIEPKT\n[17.9]: https:\u002F\u002Fcsyhhu.github.io\u002F\n[17.15]: https:\u002F\u002Fgithub.com\u002Flehduong\u002Fginp\n[18.8]: https:\u002F\u002Fgithub.com\u002FIntelLabs\u002Fdistiller\n[18.11]: https:\u002F\u002Fgithub.com\u002Fkaranchahal\u002Fdistiller\n[18.12]: https:\u002F\u002Fgithub.com\u002Fairaria\u002FTextBrewer\n[18.13]: http:\u002F\u002Fqszhang.com\u002F\n[18.28]: https:\u002F\u002Fgithub.com\u002Fyoshitomo-matsubara\u002Ftorchdistill\n[18.29]: https:\u002F\u002Fgithub.com\u002FSforAiDl\u002FKD_Lib\n[18.30]: https:\u002F\u002Fgithub.com\u002FAberHu\u002FKnowledge-Distillation-Zoo\n[18.31]: https:\u002F\u002Fgithub.com\u002FHobbitLong\u002FRepDistiller\n[18.32]: http:\u002F\u002Fzhiqiangshen.com\u002Fprojects\u002FLS_and_KD\u002Findex.html  \n[2.55]: https:\u002F\u002Fgithub.com\u002FArchipLab-LinfengZhang\u002FTask-Oriented-Feature-Distillation\n[19.10]: https:\u002F\u002Fgithub.com\u002Froymiles\u002FITRD","# Awesome-Knowledge-Distillation 快速上手指南\n\n**Awesome-Knowledge-Distillation** 并非一个单一的可安装软件包，而是一个精选的知识蒸馏（Knowledge Distillation, KD）领域开源论文、代码实现及工具的资源列表。本指南旨在帮助开发者快速利用该列表中的资源，搭建环境并运行经典的蒸馏示例。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下基本要求。由于列表中包含大量基于 PyTorch 和 TensorFlow 的项目，建议以 PyTorch 生态为主进行配置。\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **硬件要求**: 支持 CUDA 的 NVIDIA GPU（推荐显存 ≥ 8GB），用于加速模型训练与蒸馏过程。\n*   **前置依赖**:\n    *   Python 3.7+\n    *   Git\n    *   CUDA Toolkit (版本需与深度学习框架匹配)\n    *   
cuDNN\n\n## 安装步骤\n\n由于该项目是资源索引，您需要根据需求克隆具体的子项目或使用通用的蒸馏库。以下以克隆主仓库并安装通用依赖为例。\n\n### 1. 克隆仓库\n使用国内镜像源加速克隆过程：\n\n```bash\ngit clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002FAwesome-Knowledge-Distillation.git\n# 或者直接从 GitHub 克隆（若网络通畅）\n# git clone https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation.git\ncd Awesome-Knowledge-Distillation\n```\n\n### 2. 创建虚拟环境\n推荐使用 `conda` 管理环境：\n\n```bash\nconda create -n kd_env python=3.8\nconda activate kd_env\n```\n\n### 3. 安装深度学习框架\n推荐使用清华源或阿里源加速安装 PyTorch：\n\n```bash\n# 示例：通过清华 PyPI 镜像安装 PyTorch（默认构建），请根据实际显卡驱动选择版本\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n# 如需安装特定 CUDA 版本（如 CUDA 11.8）的构建，可改用 PyTorch 官方源：\n# pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n### 4. 安装通用工具库\n许多列表中的项目依赖常见的科学计算库：\n\n```bash\npip install numpy scipy matplotlib tqdm scikit-learn -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：若要运行列表中特定的论文代码（如 `FitNets`, `CRD`, `DKD` 等），请进入对应论文的官方代码仓库，并按照其单独的 `requirements.txt` 进行安装。\n\n## 基本使用\n\n知识蒸馏的核心流程通常包含三个步骤：**训练教师模型 (Teacher)** -> **提取知识 (Logits\u002FFeatures)** -> **训练学生模型 (Student)**。\n\n以下是一个基于 PyTorch 的最简伪代码示例，演示了最经典的 **Logits 蒸馏**（参考 Hinton et al. 2015）的基本逻辑：\n\n### 1. 定义蒸馏损失函数\n结合硬标签（Ground Truth）和软标签（Teacher Logits）计算损失。\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass DistillationLoss(nn.Module):\n    def __init__(self, temperature=3.0, alpha=0.7):\n        super().__init__()\n        self.T = temperature\n        self.alpha = alpha\n        self.kl_div = nn.KLDivLoss(reduction='batchmean')\n        self.ce_loss = nn.CrossEntropyLoss()\n\n    def forward(self, student_logits, teacher_logits, targets):\n        # 计算软目标损失 (KL 散度)\n        soft_loss = self.kl_div(\n            F.log_softmax(student_logits \u002F self.T, dim=1),\n            F.softmax(teacher_logits \u002F self.T, dim=1)\n        ) * (self.T ** 2)\n        \n        # 计算硬目标损失 (交叉熵)\n        hard_loss = self.ce_loss(student_logits, targets)\n        \n        # 加权总和\n        return self.alpha * soft_loss + (1.0 - self.alpha) * hard_loss\n```\n\n### 2. 训练循环示例\n假设已加载预训练好的 `teacher_model` 和初始化的 `student_model`。\n\n```python\n# 初始化\ncriterion = DistillationLoss(temperature=4.0, alpha=0.5)\noptimizer = torch.optim.SGD(student_model.parameters(), lr=0.1, momentum=0.9)\n\n# 学生模型设为训练模式；教师模型设为评估模式并冻结参数\nstudent_model.train()\nteacher_model.eval()\nfor param in teacher_model.parameters():\n    param.requires_grad = False\n\n# 训练迭代\nfor images, labels in dataloader:\n    images, labels = images.cuda(), labels.cuda()\n    \n    optimizer.zero_grad()\n    \n    # 前向传播\n    with torch.no_grad():\n        teacher_logits = teacher_model(images)\n    \n    student_logits = student_model(images)\n    \n    # 计算蒸馏损失\n    loss = criterion(student_logits, teacher_logits, labels)\n    \n    # 反向传播\n    loss.backward()\n    optimizer.step()\n```\n\n### 3. 
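中间层特征蒸馏（进阶示意）\n除了上述 Logits 蒸馏，列表中的 `Knowledge from intermediate layers` 章节（如 FitNets、AT）关注让学生模仿教师的中间层特征。下面给出一个极简的特征对齐（hint loss）示意：其中 `FeatureDistillationLoss`、`regressor` 等命名与结构均为本指南的示例假设，并非列表中某篇论文的官方实现，仅用于说明思路。\n\n```python\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass FeatureDistillationLoss(nn.Module):\n    def __init__(self, student_channels, teacher_channels):\n        super().__init__()\n        # 用 1x1 卷积把学生特征的通道数映射到教师特征的通道数\n        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)\n\n    def forward(self, student_feat, teacher_feat):\n        # 若空间尺寸不一致，先做双线性插值对齐（示例性处理）\n        if student_feat.shape[2:] != teacher_feat.shape[2:]:\n            student_feat = F.interpolate(\n                student_feat, size=teacher_feat.shape[2:],\n                mode='bilinear', align_corners=False\n            )\n        # 以 MSE 约束学生的中间特征逼近教师的中间特征\n        return F.mse_loss(self.regressor(student_feat), teacher_feat)\n```\n\n使用时，可将该损失与前面的 `DistillationLoss` 加权相加作为总损失，并把 `regressor` 的参数一并加入优化器；`student_feat` 与 `teacher_feat` 通常通过 forward hook 或模型返回值获取，具体取决于所复现论文的实现。\n\n### 4. 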
探索更多实现\n浏览本仓库目录，您可以找到针对不同场景的实现方案：\n*   **中间层特征蒸馏**: 查看 `Knowledge from intermediate layers` 章节（如 FitNets, AT）。\n*   **无数据蒸馏**: 查看 `Data-free KD` 章节。\n*   **特定任务**: 查看 `Application of KD` 章节（如 NLP, 推荐系统）。\n\n直接访问列表中提供的 `[code]` 链接，即可获取对应论文的详细复现代码。","某自动驾驶初创公司的算法团队正致力于将高精度的感知模型部署到算力受限的车载边缘芯片上，急需在保持准确率的同时大幅压缩模型体积。\n\n### 没有 Awesome-Knowledge-Distillation 时\n- **文献调研如大海捞针**：团队成员需手动在 arXiv 和各大会议中搜索“知识蒸馏”相关论文，耗时数周仍难以覆盖 2014-2021 年间的关键成果，极易遗漏如\"Teacher Assistant\"或\"Self-KD\"等进阶方案。\n- **技术选型盲目试错**：面对 Logits 蒸馏、中间层特征对齐、图结构蒸馏等多种技术路线，缺乏系统分类指引，导致团队错误选择了不适配当前检测任务的蒸馏策略，浪费大量算力资源。\n- **跨领域应用受阻**：当尝试将蒸馏技术迁移至雷达点云处理或小样本场景时，因找不到\"Cross-modal\"或\"Data-free KD\"等细分领域的专项论文，项目陷入停滞。\n- **复现成本高昂**：由于缺乏对各类变体（如结合 GAN、元学习或自动化搜索）的整理，开发人员需从零阅读大量冗长原文才能理解核心差异，严重拖慢迭代进度。\n\n### 使用 Awesome-Knowledge-Distillation 后\n- **一站式全景索引**：团队直接利用其整理的 658+ 篇论文清单，按\"Logits\"、“中间层”、“自蒸馏”等维度快速定位到最适合车载场景的\"Relational Knowledge Distillation\"方案，调研时间缩短 80%。\n- **精准匹配技术路径**：借助清晰的分类结构，迅速排除了不适用的纯日志蒸馏，锁定了能更好保留空间结构信息的图基于（Graph-based）蒸馏方法，显著提升了小模型对行人检测的精度。\n- **激发创新组合思路**：通过浏览\"KD + AutoML\"和\"Multi-teacher\"板块，团队受启发设计了多教师集成蒸馏架构，成功解决了单一教师模型在极端天气下泛化能力不足的问题。\n- **高效落地验证**：参考列表中提供的代码链接和经典复现路径，团队在一周内完成了基线搭建与对比实验，加速了模型从实验室到实车部署的闭环。\n\nAwesome-Knowledge-Distillation 通过将碎片化的学术成果系统化，让工程师能从繁重的文献挖掘中解放出来，专注于解决实际的模型压缩与性能平衡难题。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFLHonker_Awesome-Knowledge-Distillation_53f998a1.png","FLHonker","Frank","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FFLHonker_a70a27c9.png","People have Dreams, without bells and whistles.",null,"https:\u002F\u002Fgithub.com\u002FFLHonker",2655,333,"2026-04-03T09:27:28",5,"","未说明",{"notes":88,"python":86,"dependencies":89},"该仓库是一个知识蒸馏（Knowledge Distillation）领域的论文和资源列表（Awesome List），并非一个可直接运行的单一软件工具或框架。README 中列出了数百篇相关学术论文及其分类，部分条目附带了独立的外部代码链接。因此，具体的运行环境、依赖库及硬件需求取决于用户选择复现哪一篇特定论文或使用哪个子项目，本仓库本身无统一的环境要求。",[],[13],[92,93,94,95,96,97],"kd","knowldge-distillation","distillation","deep-learning","transfer-learning","model-compression","2026-03-27T02:49:30.150509","2026-04-06T07:12:39.068554",[101,106,111,116,121,126],{"id":102,"question_zh":103,"answer_zh":104,"source_url":105},16771,"为什么该仓库中关于推荐系统（RecSys）的知识蒸馏论文较少？","本仓库主要收集计算机视觉（CV）相关的论文。推荐系统中使用的知识蒸馏方法基本上是从 CV 领域迁移过去的，方法上不算很新，也没有难以解决的技术问题，可能因为创新度不够所以相关论文较少。如果您发现有新的相关论文，欢迎贡献。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F6",{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},16772,"我想为仓库添加“推荐系统中的知识蒸馏”分类，应该如何操作？","建议您不要创建新分支，而是直接在 README 文件的“应用（Application）”部分，参照现有格式（例如在'for NLP'之后）创建一个新的子章节。如果您已经整理了一些论文，可以将它们直接添加到仓库中或提供给维护者。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F9",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},16773,"仓库中的论文条目是否包含项目代码链接？","目前所有论文的 PDF 均可在 arXiv 上下载，因此未单独列出文章链接以保持页面美观。维护者计划后续会补充那些已开源代码的项目链接。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F1",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},16774,"如何向该仓库贡献新的知识蒸馏论文？","您可以通过提交 Issue 的方式推荐新论文。请在 Issue 中提供论文的完整标题、作者、发表会议\u002F期刊以及链接（如 arXiv 或 GitHub 代码库地址），并说明建议将其归类到哪个板块（如结构化蒸馏、应用类等）。维护者确认后会将其加入列表。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F3",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},16775,"如果发现某篇已收录论文发布了代码，如何更新仓库信息？","您可以提交一个 Issue，明确指出论文名称（例如 \"DAFL: Data-Free Learning of Student Networks\"）及其对应的代码仓库 URL（例如 
https:\u002F\u002Fgithub.com\u002Fhuawei-noah\u002FData-Efficient-Model-Compression\u002Ftree\u002Fmaster\u002FDAFL），请求维护者更新该条目的代码链接。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F15",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},16776,"有哪些最近的知识蒸馏论文值得关注和收录？","近期值得关注的论文包括：《Knowledge Distillation as Efficient Pre-training》、《Knowledge Distillation via the Target-aware Transformer》、《Knowledge Distillation with the Reused Teacher Classifier》、《Decoupled Knowledge Distillation》以及《Self-Distillation from the Last Mini-Batch for Consistency Regularization》。这些论文涵盖了预训练效率、Transformer 架构应用及一致性正则化等方向。","https:\u002F\u002Fgithub.com\u002FFLHonker\u002FAwesome-Knowledge-Distillation\u002Fissues\u002F21",[]]