[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Vanint--Awesome-LongTailed-Learning":3,"tool-Vanint--Awesome-LongTailed-Learning":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,2,"2026-04-05T10:45:23",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 
解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[19,13,20,18],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[20,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},2234,"scikit-learn","scikit-learn\u002Fscikit-learn","scikit-learn 是一个基于 Python 构建的开源机器学习库，依托于 SciPy、NumPy 等科学计算生态，旨在让机器学习变得简单高效。它提供了一套统一且简洁的接口，涵盖了从数据预处理、特征工程到模型训练、评估及选择的全流程工具，内置了包括线性回归、支持向量机、随机森林、聚类等在内的丰富经典算法。\n\n对于希望快速验证想法或构建原型的数据科学家、研究人员以及 Python 开发者而言，scikit-learn 是不可或缺的基础设施。它有效解决了机器学习入门门槛高、算法实现复杂以及不同模型间调用方式不统一的痛点，让用户无需重复造轮子，只需几行代码即可调用成熟的算法解决分类、回归、聚类等实际问题。\n\n其核心技术亮点在于高度一致的 API 设计风格，所有估算器（Estimator）均遵循相同的调用逻辑，极大地降低了学习成本并提升了代码的可读性与可维护性。此外，它还提供了强大的模型选择与评估工具，如交叉验证和网格搜索，帮助用户系统地优化模型性能。作为一个由全球志愿者共同维护的成熟项目，scikit-learn 以其稳定性、详尽的文档和活跃的社区支持，成为连接理论学习与工业级应用的最",65628,"2026-04-05T10:10:46",[20,18,14],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":10,"last_commit_at":63,"category_tags":64,"status":22},3364,"keras","keras-team\u002Fkeras","Keras 
是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。",63927,"2026-04-04T15:24:37",[20,14,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":71,"readme_en":72,"readme_zh":73,"quickstart_zh":74,"use_case_zh":75,"hero_image_url":76,"owner_login":77,"owner_name":78,"owner_avatar_url":79,"owner_bio":80,"owner_company":81,"owner_location":82,"owner_email":83,"owner_twitter":80,"owner_website":84,"owner_url":85,"languages":86,"stars":95,"forks":96,"last_commit_at":97,"license":80,"difficulty_score":98,"env_os":99,"env_gpu":100,"env_ram":100,"env_deps":101,"category_tags":104,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":22,"created_at":105,"updated_at":106,"faqs":107,"releases":108},3267,"Vanint\u002FAwesome-LongTailed-Learning","Awesome-LongTailed-Learning","A codebase and a curated list of awesome deep long-tailed learning (TPAMI 2023).","Awesome-LongTailed-Learning 是一个专为深度学习领域打造的开源资源库与代码合集，源自发表于顶级期刊 TPAMI 2023 的综述论文《Deep Long-Tailed Learning: A Survey》。它主要致力于解决人工智能训练中常见的“长尾分布”难题：即在现实数据集中，少数类别样本丰富，而绝大多数类别样本稀缺，导致模型对稀有类别的识别能力严重不足。\n\n该项目不仅系统梳理了该领域的最新进展，将现有方法科学地划分为重采样、信息增强、模块改进等三大类及九个子方向，还提供了一个持续更新的顶会论文清单（涵盖 ICCV 等会议），并附带了多种主流算法的代码实现与实证分析。其独特的技术亮点在于清晰的分类体系（如解耦训练、对数调整、表示学习等）以及从理论综述到代码复现的一站式支持。\n\nAwesome-LongTailed-Learning 非常适合 AI 研究人员、算法工程师及计算机视觉开发者使用。无论是希望快速了解长尾学习前沿动态的学者，还是需要在实际项目中提升模型对稀缺样本识别能力的开发者，都能从中获得宝贵的理论指引和实用的代码参考，从而高效推动相关技术的落地与研究。","Awesome-LongTailed-Learning 是一个专为深度学习领域打造的开源资源库与代码合集，源自发表于顶级期刊 TPAMI 2023 的综述论文《Deep Long-Tailed Learning: A 
Survey》。它主要致力于解决人工智能训练中常见的“长尾分布”难题：即在现实数据集中，少数类别样本丰富，而绝大多数类别样本稀缺，导致模型对稀有类别的识别能力严重不足。\n\n该项目不仅系统梳理了该领域的最新进展，将现有方法科学地划分为重采样、信息增强、模块改进等三大类及九个子方向，还提供了一个持续更新的顶会论文清单（涵盖 ICCV 等会议），并附带了多种主流算法的代码实现与实证分析。其独特的技术亮点在于清晰的分类体系（如解耦训练、对数调整、表示学习等）以及从理论综述到代码复现的一站式支持。\n\nAwesome-LongTailed-Learning 非常适合 AI 研究人员、算法工程师及计算机视觉开发者使用。无论是希望快速了解长尾学习前沿动态的学者，还是需要在实际项目中提升模型对稀缺样本识别能力的开发者，都能从中获得宝贵的理论指引和实用的代码参考，从而高效推动相关技术的落地与研究。","# Awesome Long-Tailed Learning (TPAMI 2023)\n\n  [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n  [![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com)\n\n  We released *[Deep Long-Tailed Learning: A Survey](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04596.pdf)* and **our codebase** to the community. In this survey, we reviewed recent advances in long-tailed learning based on deep neural networks. Existing long-tailed learning studies can be grouped into three main categories (i.e., class re-balancing, information augmentation and module improvement), which can be further classified into nine sub-categories (as shown in the below figure). We also provided empirical analysis for several state-of-the-art methods by evaluating to what extent they address the issue of class imbalance. We concluded the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research. \n\n  After completing this survey, we decided to release our long-tailed learning resources and codebase, hoping to push the development of the community. 
If you have any questions or suggestions, please feel free to contact us.\n\n    \n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_89843bf54774.png\" width=1000>\n  \u003C\u002Fp>\n  \n\n  ## 1. Type of Long-tailed Learning\n\n  | Symbol | `Sampling`  |          `CSL`          |       `LA`       |       `TL`        |       `Aug`       |\n  | :----- | :---------: | :---------------------: | :--------------: | :---------------: | :---------------: |\n  | Type   | Re-sampling | Class-sensitive Learning | Logit Adjustment | Transfer Learning | Data Augmentation |\n\n  | Symbol |          `RL`           |       `CD`        |        `DT`        |    `Ensemble`     |   `other`   |\n  | :----- | :---------------------: | :---------------: | :----------------: | :---------------: | :---------: |\n  | Type   | Representation Learning | Classifier Design | Decoupled Training | Ensemble Learning | Other Types |\n\n  ## 2. 
Top-tier Conference Papers (Updated on 2025 June)\n\n\n\n  ### 2025\n\n  | Title                  |  Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: |\n  | [Supervised Exploratory Learning for Long-Tailed Visual Recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FJian_Supervised_Exploratory_Learning_for_Long-Tailed_Visual_Recognition_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Sampling`,`RL`  |       | \n  | [You Are Your Own Best Teacher: Achieving Centralized-level Performance in Federated Learning under Heterogeneous and Long-tailed Data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FYan_You_Are_Your_Own_Best_Teacher_Achieving_Centralized-level_Performance_in_ICCV_2025_paper.pdf) | ICCV | 2025 |  `LA`,`TL`,`RL`  |    [Official](https:\u002F\u002Fgithub.com\u002Fshanss132\u002FFedYoYo)        | \n  | [Boosting Class Representation via Semantically Related Instances for Robust Long-Tailed Learning with Noisy Labels](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLi_Boosting_Class_Representation_via_Semantically_Related_Instances_for_Robust_Long-Tailed_ICCV_2025_paper.pdf) | ICCV | 2025 |  `TL`,`Ensemble`,`other` |    [Official](https:\u002F\u002Fgithub.com\u002Fyhliml\u002FIBC)        | \n  | [Long-Tailed Classification with Multi-Granularity Semantics](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLiu_Long-Tailed_Classification_with_Multi-Granularity_Semantics_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Aug`,`RL`  |        | \n  | [AMD: Adaptive Momentum and Decoupled Contrastive Learning Framework for Robust Long-Tail Trajectory 
Prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FRao_AMD_Adaptive_Momentum_and_Decoupled_Contrastive_Learning_Framework_for_Robust_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Aug`,`RL`  |       | \n  | [Generative Active Learning for Long-tail Trajectory Prediction via Controllable Diffusion Model](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FPark_Generative_Active_Learning_for_Long-tail_Trajectory_Prediction_via_Controllable_Diffusion_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Aug`,`other`  |        \n  | [Category-Specific Selective Feature Enhancement for Long-Tailed Multi-Label Image Classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FDu_Category-Specific_Selective_Feature_Enhancement_for_Long-Tailed_Multi-Label_Image_Classification_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`  |        | \n  | [Overcoming Dual Drift for Continual Long-Tailed Visual Question Answering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FZhang_Overcoming_Dual_Drift_for_Continual_Long-Tailed_Visual_Question_Answering_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`,`CD`  |        | \n  | [A Tiny Change, A Giant Leap: Long-Tailed Class-Incremental Learning via Geometric Prototype Alignment](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLai_A_Tiny_Change_A_Giant_Leap_Long-Tailed_Class-Incremental_Learning_via_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`,`CD`  |    [Official](https:\u002F\u002Fgithub.com\u002Flaixinyi023\u002FGeometric-Prototype-Alignment)        | \n  | [Toward Long-Tailed Online Anomaly Detection through Class-Agnostic Concepts](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FYang_Toward_Long-Tailed_Online_Anomaly_Detection_through_Class-Agnostic_Concepts_ICCV_2025_paper.pdf) | ICCV | 2025 |  `other`  |    
[Official](https:\u002F\u002Fzenodo.org\u002Frecords\u002F16283853)        | \n  | [Rethinking the Bias of Foundation Model under Long-tailed Distribution](https:\u002F\u002Fopenreview.net\u002Fattachment?id=jSoNlHD9qA&name=pdf) | ICML | 2025 |  `LA`  |        | \n  | [Advancing Personalized Learning with Neural Collapse for Long-Tail Challenge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=W7phL2sNif) | ICML | 2025 |  `RL` |    [Official](https:\u002F\u002Fgithub.com\u002Fllm4edu\u002FNCAL_ICML2025)    | \n  | [Focal-SAM: Focal Sharpness-Aware Minimization for Long-Tailed Classification](https:\u002F\u002Fopenreview.net\u002Fattachment?id=lCk4PZto8T&name=pdf) | ICML | 2025 |  `RL`  |        | \n  | [A Square Peg in a Square Hole: Meta-Expert for Long-Tailed Semi-Supervised Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=h0ZeiDRN8A) | ICML | 2025 |  `RL`,`Ensemble` |        | \n  | [Balancing Model Efficiency and Performance: Adaptive Pruner for Long-tailed Data](https:\u002F\u002Fopenreview.net\u002Fpdf?id=1d1ssNedLv) | ICML | 2025 |  `other` |    [Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FLT-VOTE)    | \n  | [TailedCore : Few-Shot Sampling for Unsupervised Long-Tail Noisy Anomaly Detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FJung_TailedCore_Few-Shot_Sampling_for_Unsupervised_Long-Tail_Noisy_Anomaly_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `Sampling` |    [Official](https:\u002F\u002Fgithub.com\u002Fjungyg\u002FTailedCore)    | \n  | [Fractal Calibration for Long-Tailed Object Detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FAlexandridis_Fractal_Calibration_for_Long-tailed_Object_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `LA`,`other` |    [Official](https:\u002F\u002Fgithub.com\u002Fkostas1515\u002FFRACAL)    | \n  | [SimLTD: Simple Supervised and Semi-Supervised Long-Tailed Object 
Detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FTran_SimLTD_Simple_Supervised_and_Semi-Supervised_Long-Tailed_Object_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL` |       | \n  | [Learning from Neighbors: Category Extrapolation for Long-Tail Learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FZhao_Learning_from_Neighbors_Category_Extrapolation_for_Long-Tail_Learning_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL`,`Aug`,`RL` |       | \n  | [Distilling Long-tailed Datasets](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FZhao_Distilling_Long-tailed_Datasets_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL`,`DT` |    [Official](https:\u002F\u002Fgithub.com\u002Fichbill\u002FLTDD)    | \n  | [Search and Detect: Training-Free Long Tail Object Detection via Web-Image Retrieval](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FSidhu_Search_and_Detect_Training-Free_Long_Tail_Object_Detection_via_Web-Image_CVPR_2025_paper.pdf) | CVPR | 2025 |   `other` |    [Official](https:\u002F\u002Fgithub.com\u002FMankeerat\u002FSearchDet)    | \n  | [TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FYu-Hang_TAET_Two-Stage_Adversarial_Equalization_Training_on_Long-Tailed__Distributions_CVPR_2025_paper.pdf) | CVPR | 2025 |   `other` |    [Official](https:\u002F\u002Fgithub.com\u002FBuhuiOK\u002FTAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions)    | \n  | [Pursuing Better Decision Boundaries for Long-Tailed Object Detection via Category Information Amount](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LW55JrLYPg) | ICLR | 2025 | `CSL` |      | \n  | [Rethinking Classifier Re-Training in Long-Tailed Recognition: Label Over-Smooth Can 
Balance](https:\u002F\u002Fopenreview.net\u002Fpdf?id=OeKp3AdiVO) | ICLR | 2025 | `LA`,`CD` |      | \n  | [Long-tailed Adversarial Training with Self-Distillation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=vM94dZiqx4) | ICLR | 2025 | `TL`,`other` |      | \n  | [ConMix: Contrastive Mixup at Representation Level for Long-tailed Deep Clustering](https:\u002F\u002Fopenreview.net\u002Fpdf?id=3lH8WT0fhu) | ICLR | 2025 | `Aug`,`RL` |    [Official](https:\u002F\u002Fgithub.com\u002FLZX-001\u002FConMix)    | \n  | [Geometry of Long-Tailed Representation Learning: Rebalancing Features for Skewed Distributions](https:\u002F\u002Fopenreview.net\u002Fpdf?id=GySIAKEwtZ) | ICLR | 2025 | `RL`  |      | \n\n  ### 2024\n\n  | Title                                                        |  Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: |\n  | [Taming the long tail in human mobility prediction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=wT2TIfHKp8) | NeurIPS | 2024 | `Sampling`,`LA` |      | \n  | [Long-tailed object detection pre-training: dynamic rebalancing contrastive learning with dual reconstruction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mGz3Jux9wS) | NeurIPS | 2024 | `Sampling`,`RL` |      | \n  | [AUCSeg: AUC-oriented pixel-level long-tail semantic segmentation](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F38fa941c480de3259c3508aaf0c968eed971b269.pdf) | NeurIPS | 2024 | `Sampling`,`other` |   [Official](https:\u002F\u002Fgithub.com\u002Fboyuh\u002FAUCSeg)       | \n  | [Continuous contrastive learning for long-tailed semi-supervised recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=PaqJ71zf1M) | NeurIPS | 2024 | `CSL`,`LA`,`RL` |   [Official](https:\u002F\u002Fgithub.com\u002Fzhouzihao11\u002FCCL)       | \n  | [Long-tailed 
out-of-distribution detection via normalized outlier distribution adaptation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=cesWi7mMLY) | NeurIPS | 2024 | `LA`  |   [Official](https:\u002F\u002Fgithub.com\u002Fmala-lab\u002FAdaptOD)       | \n  | [LLM-ESR: Large language models enhancement for long-tailed sequential recommendation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=xojbzSYIVS) | NeurIPS | 2024 | `TL`,`Ensemble` |   [Official](https:\u002F\u002Fgithub.com\u002FApplied-Machine-Learning-Lab\u002FLLM-ESR)      | \n  | [DiffuLT: Diffusion for long-tail recognition without external knowledge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Kcsj9FGnKR) | NeurIPS | 2024 | `Aug` |      | \n  | [LLM-AutoDA: Large language model-driven automatic data augmentation for long-tailed problems](https:\u002F\u002Fopenreview.net\u002Fpdf?id=VpuOuZOVhP) | NeurIPS | 2024 | `Aug` |     [Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FLLM-LT-AUG)       | \n  | [Breaking long-tailed learning bottlenecks: A controllable paradigm with hypernetwork-generated diverse experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=WpPNVPAEyv) | NeurIPS | 2024 | `Ensemble` |    [Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FPRL)    | \n  | [Once Read is Enough: Domain-specific pretraining-free language models with cluster-guided sparse experts for long-tail domain knowledge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=manHbkpIW6) | NeurIPS | 2024 | `Ensemble` |        | \n  | [Improving visual prompt tuning by Gaussian neighborhood minimization for long-tailed visual recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7lMN6xoBjb) | NeurIPS | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002FKeke921\u002FGNM-PT)       | \n  | [Towards heterogeneous long-tailed learning: Benchmarking, Metrics, and Toolbox](https:\u002F\u002Fopenreview.net\u002Fpdf?id=plIuBfYpXj) | NeurIPS | 2024 | `other` |   
[Official](https:\u002F\u002Fgithub.com\u002FSSSKJ\u002FHeroLT)       |  \n  | [What makes CLIP more robust to long-tailed pre-training data? A controlled study for transferable insights](https:\u002F\u002Fopenreview.net\u002Fpdf?id=PcyioHOmjq) | NeurIPS | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002FCVMI-Lab\u002Fclip-beyond-tail)       |  \n  | [Flexible distribution alignment: Towards long-tailed semi-supervised learning with proper calibration](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F07132.pdf) | ECCV | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002Femasa\u002FADELLO-LTSSL)       |  \n  | [Long-tail temporal action segmentation with group-wise temporal logit adjustment](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04389.pdf) | ECCV | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002Fpangzhan27\u002FGTLA)       |  \n  | [Distribution-aware robust learning from long-tailed data with noisy labels](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F02177.pdf) | ECCV | 2024 | `Aug`,`RL` |   [Official](https:\u002F\u002Fgithub.com\u002FJaesoonBaik1213\u002FDaSC)       |  \n  | [Distributionally robust loss for long-tailed multi-label image classification](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04926.pdf) | ECCV | 2024 | `RL` |   [Official](https:\u002F\u002Fgithub.com\u002FKunmonkey\u002FDR-Loss)       |  \n  | [Rectify the regression bias in long-tailed object detection](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04069.pdf) | ECCV | 2024 | `RL`,`CD` |       |  \n  | [LTRL: Boosting long-tail recognition via reflective learning](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F08380.pdf) | ECCV | 2024 | `other` |   
[Official](https:\u002F\u002Fgithub.com\u002Ffistyee\u002FLTRL)       |  \n  | [Learning label shift correction for test-agnostic long-tailed recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=J3xYTh6xtL) | ICML | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002FStomach-ache\u002Flabel-shift-correction)      |  \n  | [Generative active learning for long-tailed instance segmentation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ofXRBPtol3) | ICML | 2024 | `Aug` |    |  \n  | [ELTA: An enhancer against long-tail for aesthetics-oriented models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dhrNfAJAH6) | ICML | 2024 | `Aug` |   [Official](https:\u002F\u002Fgithub.com\u002Fwoshidandan\u002FLong-Tail-image-aesthetics-and-quality-assessment)       |  \n  | [Distribution alignment optimization through neural collapse for long-tailed classification](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hjwx3H6Vci) | ICML | 2024 | `RL` |    [Official](https:\u002F\u002Fgithub.com\u002FJintongGao\u002FDisA)      | \n  | [Long-tail learning with foundation model: Heavy fine-tuning hurts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ccSSKTz9LX) | ICML | 2024 | `CD` |    [Official](https:\u002F\u002Fgithub.com\u002Fshijxcs\u002FLIFT)      |  \n  | [SimPro: A simple probabilistic framework towards realistic long-tailed semi-supervised learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NbOlmrB59Z) | ICML | 2024 | `CD` |    [Official](https:\u002F\u002Fgithub.com\u002FLeapLabTHU\u002FSimPro)      | \n  | [Harnessing hierarchical label distribution variations in test agnostic long-tail recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ebt5BfRHcW) | ICML | 2024 | `Ensemble` |    [Official](https:\u002F\u002Fgithub.com\u002Fscongl\u002FDirMixE)      |  \n  | [Two Fists, One Heart: Multi-objective optimization based strategy fusion for long-tailed learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=MEZydkOr3l) | ICML | 2024 | `other` |    
[Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002Ftorch-MOOSF)      |  \n  | [BEM: Balanced and entropy-based mix for long-tailed semi-supervised learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FZheng_BEM_Balanced_and_Entropy-based_Mix_for_Long-Tailed_Semi-Supervised_Learning_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`TL`,`CD` |  [Official](https:\u002F\u002Fgithub.com\u002FZhenghongwei0929\u002FCVPR2024-BEM)         |\n  | [DeiT-LT: Distillation strikes back for vision transformer training on long-tailed datasets](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FRangwani_DeiT-LT_Distillation_Strikes_Back_for_Vision_Transformer_Training_on_Long-Tailed_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`TL`,`CD` |  [Official](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002FDeiT-LT)         |\n  | [Revisiting adversarial training under long-tailed distributions](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FYue_Revisiting_Adversarial_Training_Under_Long-Tailed_Distributions_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`Aug` |  [Official](https:\u002F\u002Fgithub.com\u002FNISPLab\u002FAT-BSL)         |\n  | [Long-tailed anomaly detection with learnable class names](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FHo_Long-Tailed_Anomaly_Detection_with_Learnable_Class_Names_CVPR_2024_paper.pdf) | CVPR | 2024 | `TL`,`Aug` |        [Official](https:\u002F\u002Fzenodo.org\u002Frecords\u002F10854201)    |  \n  | [LTGC: Long-tail recognition via leveraging LLMs-driven generated content](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FZhao_LTGC_Long-tail_Recognition_via_Leveraging_LLMs-driven_Generated_Content_CVPR_2024_paper.pdf) | CVPR | 2024 | `Aug` |   [Official](https:\u002F\u002Fgithub.com\u002Fdialogueeeeee\u002FLTGC)       |  \n  | [Delving into the trajectory long-tail 
distribution for multi-object tracking](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FChen_Delving_into_the_Trajectory_Long-tail_Distribution_for_Muti-object_Tracking_CVPR_2024_paper.pdf) | CVPR | 2024 | `Aug`,`CD` |   [Official](https:\u002F\u002Fgithub.com\u002Fchen-si-jia\u002FTrajectory-Long-tail-Distribution-for-MOT)       |  \n  | [Long-tail class incremental learning via independent sub-prototype construction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FWang_Long-Tail_Class_Incremental_Learning_via_Independent_Sub-prototype_Construction_CVPR_2024_paper.pdf) | CVPR | 2024 | `RL` |       |\n  | [Long-tailed diffusion models with oriented calibration](https:\u002F\u002Fopenreview.net\u002Fattachment?id=NW2s5XXwXU&name=pdf) | ICLR | 2024 | `Sampling`,`CSL`,`TL` |       [Official](https:\u002F\u002Fgithub.com\u002FMediaBrain-SJTU\u002FOC_LT)       |\n  | [Kill two birds with one stone: Rethinking data augmentation for deep long-tailed learning](https:\u002F\u002Fopenreview.net\u002Fattachment?id=RzY9qQHUXy&name=pdf) | ICLR | 2024 | `Aug` |       [Official](https:\u002F\u002Fgithub.com\u002Fpongkun\u002FCode-for-DODA)       |\n  | [FedLoGe: Joint local and generic federated learning under long-tailed data](https:\u002F\u002Fopenreview.net\u002Fattachment?id=V3j5d0GQgH&name=pdf) | ICLR | 2024 | `RL` |       [Official](https:\u002F\u002Fgithub.com\u002FZackZikaiXiao\u002FFedLoGe)       |\n  | [Learning to reject meets long-tail learning](https:\u002F\u002Fopenreview.net\u002Fattachment?id=ta26LtNq2r&name=pdf) | ICLR | 2024 | `CD` |      |\n  | [Exploring weight balancing on long-tailed recognition problem](https:\u002F\u002Fopenreview.net\u002Fattachment?id=JsnR0YO4Fq&name=pdf) | ICLR | 2024 | `DT` |       [Official](https:\u002F\u002Fgithub.com\u002FHN410\u002FExploring-Weight-Balancing-on-Long-Tailed-Recognition-Problem)       |\n  | [Pareto deep long-tailed recognition: A 
conflict-averse solution](https:\u002F\u002Fopenreview.net\u002Fattachment?id=b66P1u0k15&name=pdf) | ICLR | 2024 | `other` |       [Official](https:\u002F\u002Fgithub.com\u002Fzzpustc\u002FPLOT)       |\n\n  ### 2023\n\n  | Title                                                        |  Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: | \n  | [How re-sampling helps for long-tail learning?](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Feeffa70bcbbd43f6bd067edebc6595e8-Paper-Conference.pdf) | NeurIPS | 2023 | `Sampling`,`Aug` |     [Official](https:\u002F\u002Fwww.lamda.nju.edu.cn\u002Fcode_CSA.ashx)        |\n  | [Fed-GraB: Federated long-tailed learning with self-adjusting gradient balancer](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Ff4b8ddb9b1aa3cb11462d64a70b84db2-Paper-Conference.pdf) | NeurIPS | 2023 | `CSL` |     [Official](https:\u002F\u002Fgithub.com\u002FZackZikaiXiao\u002FFedGraB)        |\n  | [Enhancing minority classes by mixing: an adaptative optimal transport approach for long-tailed classification](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Fbdabb5d4262bcfb6a1d529d690a6c82b-Paper-Conference.pdf) | NeurIPS | 2023 | `Aug` |     [Official](https:\u002F\u002Fgithub.com\u002FJintongGao\u002FEnhancing-Minority-Classes-by-Mixing) |\n  | [Learning from rich semantics and coarse locations for long-tailed object detection](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Ff5fcd88d3deb97bb62559208cfa0ab62-Paper-Conference.pdf) | NeurIPS | 2023 | `RL` |     [Official](https:\u002F\u002Fgithub.com\u002FMengLcool\u002FRichSem)        |\n  | [Generalized test 
utilities for long-tail performance in extreme multi-label classification](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002F46994b3d6dd0fd5fca5f780af6259db5-Paper-Conference.pdf) | NeurIPS | 2023 | `other` |          |\n  | [Label-noise learning with intrinsically long-tailed data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FLu_Label-Noise_Learning_with_Intrinsically_Long-Tailed_Data_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling` |     [Official](https:\u002F\u002Fgithub.com\u002FWakings\u002FTABASCO)        |\n  | [MDCS: More diverse experts with consistency self-distillation for long-tailed recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FZhao_MDCS_More_Diverse_Experts_with_Consistency_Self-distillation_for_Long-tailed_Recognition_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`TL`,`Ensemble` |     [Official](https:\u002F\u002Fgithub.com\u002Ffistyee\u002FMDCS)        |\n  | [Subclass-balancing Contrastive Learning for Long-tailed Recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FHou_Subclass-balancing_Contrastive_Learning_for_Long-tailed_Recognition_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`RL`|     [Official](https:\u002F\u002Fgithub.com\u002FJackHck\u002FSBCL)        |\n  | [When Noisy Labels Meet Long Tail Dilemmas: A Representation Calibration Method](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FZhang_When_Noisy_Labels_Meet_Long_Tail_Dilemmas_A_Representation_Calibration_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`RL`,`DT`|          |\n  | [AREA: Adaptive reweighting via effective area for long-tailed classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FChen_AREA_Adaptive_Reweighting_via_Effective_Area_for_Long-Tailed_Classification_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL` |     
[Official](https:\u002F\u002Fgithub.com\u002Fxiaohua-chen\u002FAREA)        |\n  | [Reconciling object-level and global-level objectives for long-tail detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FZhang_Reconciling_Object-Level_and_Global-Level_Objectives_for_Long-Tail_Detection_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL` |     [Official](https:\u002F\u002Fgithub.com\u002FEricZsy\u002FROG)        |\n  | [Local and global logit adjustments for long-tailed learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FTao_Local_and_Global_Logit_Adjustments_for_Long-Tailed_Learning_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`,`LA`,`Ensemble` |          |\n  | [Learning in imperfect environment: Multi-label classification with long-tailed distribution and partial labels](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FZhang_Learning_in_Imperfect_Environment_Multi-Label_Classification_with_Long-Tailed_Distribution_and_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`,`TL` |     [Official](https:\u002F\u002Fgithub.com\u002Fwannature\u002FCOMIC)        |\n  | [Global balanced experts for federated long-tailed learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FZeng_Global_Balanced_Experts_for_Federated_Long-Tailed_Learning_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`, `Ensemble` |     [Official](https:\u002F\u002Fgithub.com\u002FSpinozaaa\u002FFederated-Long-tailed-Learning)       |\n  | [Boosting long-tailed object detection via step-wise learning on smooth-tail data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FDong_Boosting_Long-tailed_Object_Detection_via_Step-wise_Learning_on_Smooth-tail_Data_ICCV_2023_paper.pdf) | ICCV | 2023 | `Ensemble` |         |\n  | [Long-tailed recognition by mutual information maximization between latent features and ground-truth 
labels](https:\u002F\u002Fopenreview.net\u002Fpdf?id=KqNX6VOqnJ) | ICML | 2023 | `CSL`,`RL` |     [Official](https:\u002F\u002Fgithub.com\u002Fbluecdm\u002FLong-tailed-recognition)        |\n  | [Large language models struggle to learn long-tail knowledge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sfdKdeczaw) | ICML | 2023 | `Aug` |        |\n  | [Feature directions matter: Long-tailed learning via rotated balanced representation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dTgxiMW6wr0) | ICML | 2023 | `RL` |        |\n  | [Wrapped Cauchy distributed angular softmax for long-tailed visual recognition](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fhan23a\u002Fhan23a.pdf) | ICML | 2023 | `RL`,`CD` |  [Official](https:\u002F\u002Fgithub.com\u002Fboranhan\u002Fwcdas_code)        | \n  | [Rethinking image super resolution from long-tailed distribution learning perspective](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FGou_Rethinking_Image_Super_Resolution_From_Long-Tailed_Distribution_Learning_Perspective_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL` |      |\n  | [Transfer knowledge from head to tail: Uncertainty calibration under long-tailed distribution](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FChen_Transfer_Knowledge_From_Head_to_Tail_Uncertainty_Calibration_Under_Long-Tailed_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`TL`  |  [Official](https:\u002F\u002Fgithub.com\u002FJiahaoChen1\u002FCalibration)        |\n  | [Towards realistic long-tailed semi-supervised learning: Consistency is all you need](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWei_Towards_Realistic_Long-Tailed_Semi-Supervised_Learning_Consistency_Is_All_You_Need_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`TL`,`Ensemble`  |  [Official](https:\u002F\u002Fgithub.com\u002FGank0078\u002FACR)        |\n  | [Global and local mixture consistency cumulative learning for long-tailed 
visual recognitions](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FDu_Global_and_Local_Mixture_Consistency_Cumulative_Learning_for_Long-Tailed_Visual_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`RL` |  [Official](https:\u002F\u002Fgithub.com\u002Fynu-yangpeng\u002FGLMC)        |\n  | [Long-tailed visual recognition via self-heterogeneous integration with knowledge excavation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FJin_Long-Tailed_Visual_Recognition_via_Self-Heterogeneous_Integration_With_Knowledge_Excavation_CVPR_2023_paper.pdf) | CVPR | 2023 | `TL`,`Ensemble`  |  [Official](https:\u002F\u002Fgithub.com\u002Fjinyan-06\u002FSHIKE)        |\n  | [Balancing logit variation for long-tailed semantic segmentation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWang_Balancing_Logit_Variation_for_Long-Tailed_Semantic_Segmentation_CVPR_2023_paper.pdf) | CVPR | 2023 | `Aug`  |  [Official](https:\u002F\u002Fgithub.com\u002Fgrantword8\u002FBLV)        |\n  | [Use your head: Improving long-tail video recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FPerrett_Use_Your_Head_Improving_Long-Tail_Video_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `Aug`  |  [Official](https:\u002F\u002Fgithub.com\u002Ftobyperrett\u002Flmr)        |\n  | [FCC: Feature clusters compression for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FLi_FCC_Feature_Clusters_Compression_for_Long-Tailed_Visual_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL`  |  [Official](https:\u002F\u002Fgithub.com\u002Flijian16\u002FFCC)        |\n  | [FEND: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory 
prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FWang_FEND_A_Future_Enhanced_Distribution-Aware_Contrastive_Learning_Framework_for_Long-Tail_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL`  |     | \n  | [SuperDisco: Super-class discovery improves visual recognition for the long-tail](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FDu_SuperDisco_Super-Class_Discovery_Improves_Visual_Recognition_for_the_Long-Tail_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL`  |     | \n  | [Class-conditional sharpness-aware minimization for deep long-tailed recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FZhou_Class-Conditional_Sharpness-Aware_Minimization_for_Deep_Long-Tailed_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `DT`  |  [Official](https:\u002F\u002Fgithub.com\u002Fzzpustc\u002FCC-SAM)        |\n  | [Balanced product of calibrated experts for long-tailed recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FAimar_Balanced_Product_of_Calibrated_Experts_for_Long-Tailed_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `Ensemble`  |  [Official](https:\u002F\u002Fgithub.com\u002Femasa\u002FBalPoE-CalibratedLT)        |\n  | [No one left behind: Improving the worst categories in long-tailed learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FDu_No_One_Left_Behind_Improving_the_Worst_Categories_in_Long-Tailed_CVPR_2023_paper.pdf) | CVPR | 2023 | `Ensemble`  |    |\n  | [On the effectiveness of out-of-distribution data in self-supervised long-tail learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=v8JIQdiN9Sh) | ICLR | 2023 | `Sampling`,`TL`,`Aug`  |  [Official](https:\u002F\u002Fgithub.com\u002FJianhongBai\u002FCOLT)        |\n  | [LPT: Long-tailed prompt tuning for image classification](https:\u002F\u002Fopenreview.net\u002Fpdf?id=8pOVAeo8ie) | ICLR | 2023 | 
`Sampling`,`TL`,`Other`  |  [Official](https:\u002F\u002Fgithub.com\u002FDongSky\u002FLPT)        |\n  | [Long-tailed partial label learning via dynamic rebalancing](https:\u002F\u002Fopenreview.net\u002Fpdf?id=sXfWoK4KvSW) | ICLR | 2023 | `CSL`  |  [Official](https:\u002F\u002Fgithub.com\u002FMediaBrain-SJTU\u002FRECORDS-LTPLL)        |\n  | [Delving into semantic scale imbalance](https:\u002F\u002Fopenreview.net\u002Fpdf?id=07tc5kKRIo) | ICLR | 2023 | `CSL`,`RL` |       |\n  | [INPL: Pseudo-labeling the inliers first for imbalanced semi-supervised learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=m6ahb1mpwwX) | ICLR | 2023 | `TL` |       |\n  | [CUDA: Curriculum of data augmentation for long-tailed recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=RgUPdudkWlN) | ICLR | 2023 | `Aug` |  [Official](https:\u002F\u002Fgithub.com\u002FJianhongBai\u002FCOLT)        |\n  | [Long-tailed learning requires feature learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=S-h1oFv-mq) | ICLR | 2023 | `RL`  |     | \n  | [Decoupled training for long-tailed classification with stochastic representations](https:\u002F\u002Fopenreview.net\u002Fpdf?id=bcYZwYo-0t) | ICLR | 2023 | `RL`,`DT` |      |\n\n\n\n  ### 2022\n\n  | Title                                                        |  Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: |\n  | [Self-supervised aggregation of diverse experts for test-agnostic long-tailed recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=m7CmxlpHTiu) | NeurIPS | 2022 | `CSL`,`Ensemble` |    [Official](https:\u002F\u002Fgithub.com\u002FVanint\u002FSADE-AgnosticLT)     |\n  | [SoLar: Sinkhorn label refinery for imbalanced partial-label learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=wUUutywJY6) | NeurIPS | 2022 | `CSL` | 
   [Official](https:\u002F\u002Fgithub.com\u002Fhbzju\u002FSoLar)     |\n  | [Do we really need a learnable classifier at the end of deep neural network?](https:\u002F\u002Fopenreview.net\u002Fpdf?id=A6EmxI3_Xc) | NeurIPS | 2022 |    `RL`,`CD`     |                                                              |\n  | [Maximum class separation as inductive bias in one matrix](https:\u002F\u002Fopenreview.net\u002Fpdf?id=MbVS6BuJ3ql) | NeurIPS | 2022 |   `CD`     |         [Official](https:\u002F\u002Fgithub.com\u002Ftkasarla\u002Fmax-separation-as-inductive-bias)     |                                                       |\n  | [Escaping saddle points for effective generalization on class-imbalanced data](https:\u002F\u002Fopenreview.net\u002Fpdf?id=9DYKrsFSU2) | NeurIPS | 2022 | `other` |    [Official](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002FSaddle-LongTail)     |                                           |\n  | [Breadcrumbs: Adversarial class-balanced sampling for long-tailed recognition](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136840628.pdf) | ECCV | 2022 | `Sampling`,`Aug`,`DT` |    [Official](https:\u002F\u002Fgithub.com\u002FBoLiu-SVCL\u002FBreadcrumbs)     |\n  | [Constructing balance from imbalance for long-tailed image recognition](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136800036.pdf) | ECCV | 2022 | `Sampling`,`RL` |    [Official](https:\u002F\u002Fgithub.com\u002Fsilicx\u002FDLSA)     |\n  | [Tackling long-tailed category distribution under domain shifts](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136830706.pdf) | ECCV | 2022 | `CSL`,`Aug`,`RL` |    [Official](https:\u002F\u002Fgithub.com\u002Fguxiao0822\u002Flt-ds)     |\n  | [Improving GANs for long-tailed data through group spectral 
regularization](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136750423.pdf) | ECCV | 2022 | `CSL`,`Other` |    [Official](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002FgSRGAN)     |\n  | [Learning class-wise visual-linguistic representation for long-tailed visual recognition](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136850072.pdf) | ECCV | 2022 | `TL`,`RL` |    [Official](https:\u002F\u002Fgithub.com\u002FChangyaoTian\u002FVL-LTR)     |\n  | [Learning with free object segments for long-tailed instance segmentation](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136700648.pdf) | ECCV | 2022 | `Aug` |       |\n  | [SAFA: Sample-adaptive feature augmentation for long-tailed image classification](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136840578.pdf) | ECCV | 2022 | `Aug`,`RL` |    | \n  | [On multi-domain long-tailed recognition, imbalanced domain generalization, and beyond](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136800054.pdf) | ECCV | 2022 | `RL`|    [Official](https:\u002F\u002Fgithub.com\u002FYyzHarry\u002Fmulti-domain-imbalance)     | \n  | [Invariant feature learning for generalized long-tailed classification](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136840698.pdf) | ECCV | 2022 | `RL`|    [Official](https:\u002F\u002Fgithub.com\u002FKaihuaTang\u002FGeneralized-Long-Tailed-Benchmarks.pytorch)     |\n  | [Towards calibrated hyper-sphere representation via distribution overlap coefficient for long-tailed learning](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136840176.pdf) | ECCV | 2022 | `RL`,`CD` |    [Official](https:\u002F\u002Fgithub.com\u002FSiLangWHL\u002FvMF-OP)     |\n  | [Long-tailed instance segmentation using 
Gumbel optimized loss](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136700349.pdf) | ECCV | 2022 | `CD` |    [Official](https:\u002F\u002Fgithub.com\u002Fkostas1515\u002FGOL)     |\n  | [Long-tailed class incremental learning](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136930486.pdf) | ECCV | 2022 | `DT` |    [Official](https:\u002F\u002Fgithub.com\u002Fxialeiliu\u002FLong-Tailed-CIL)     |\n  | [Identifying hard noise in long-tailed sample distribution](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2022\u002Fpapers_ECCV\u002Fpapers\u002F136860725.pdf) | ECCV | 2022 | `Other` |    [Official](https:\u002F\u002Fgithub.com\u002Fyxymessi\u002FH2E-Framework)     |\n  | [Relieving long-tailed instance segmentation via pairwise class balance](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.02784.pdf) |  CVPR   | 2022 |      `CSL`       |      [Official](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FPCB)      |\n  | [The majority can help the minority: Context-rich minority oversampling for long-tailed classification](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.00412.pdf) |  CVPR   | 2022 |    `TL`,`Aug`    |         [Official](https:\u002F\u002Fgithub.com\u002Fnaver-ai\u002Fcmo)          |\n  | [Long-tail recognition via compositional knowledge transfer](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.06741.pdf) |  CVPR   | 2022 |    `TL`,`RL`     |                                                              |\n  | [BatchFormer: Learning to explore sample relationships for robust representation learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.01522.pdf) |  CVPR   | 2022 |    `TL`,`RL`     |      [Official](https:\u002F\u002Fgithub.com\u002Fzhihou7\u002FBatchFormer)      |\n  | [Nested collaborative learning for long-tailed visual recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.15359.pdf) |  CVPR   | 2022 | `RL`,`Ensemble`  |        
[Official](https:\u002F\u002Fgithub.com\u002FBazinga699\u002FNCL)         |\n  | [Long-tailed recognition via weight balancing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.14197.pdf) |  CVPR   | 2022 |       `DT`       | [Official](https:\u002F\u002Fgithub.com\u002FShadeAlsha\u002FLTR-weight-balancing) |\n  | [Class-balanced pixel-level self-labeling for domain adaptive semantic segmentation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.09744.pdf) |  CVPR   | 2022 |     `other`      |          [Official](https:\u002F\u002Fgithub.com\u002Flslrh\u002FCPSL)           |\n  | [Killing two birds with one stone: Efficient and robust training of face recognition CNNs by partial FC](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.15565.pdf) |  CVPR   | 2022 |     `other`      | [Official](https:\u002F\u002Fgithub.com\u002Fdeepinsight\u002Finsightface\u002Ftree\u002Fmaster\u002Frecognition) |\n  | [Optimal transport for long-tailed recognition with learnable cost matrix](https:\u002F\u002Fopenreview.net\u002Fpdf?id=t98k9ePQQpn) |  ICLR   | 2022 |       `LA`       |                                                              |\n  | [Do deep networks transfer invariances across classes?](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Fn7i_r5rR0q) |  ICLR   | 2022 |    `TL`,`Aug`    | [Official](https:\u002F\u002Fgithub.com\u002FAllanYangZhou\u002Fgenerative-invariance-transfer) |\n  | [Self-supervised learning is more robust to dataset imbalance](https:\u002F\u002Fopenreview.net\u002Fpdf?id=4AZz9osqrar) |  ICLR   | 2022 |       `RL`       |                                                              |\n\n  ### 2021\n\n  | Title                                                        |  Venue  | Year |             Type             |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------------------: | :----------------------------------------------------------: |\n  | 
[Improving contrastive learning on imbalanced seed data via open-world sampling](https:\u002F\u002Fopenreview.net\u002Fpdf?id=EIfV-XAggKo) | NeurIPS | 2021 |    `Sampling`,`TL`, `DC`     |        [Official](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FMAK)         |\n  | [Semi-supervised semantic segmentation via adaptive equalization learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fb98249b38337c5088bbc660d8f872d6a-Paper.pdf) | NeurIPS | 2021 | `Sampling`,`CSL`,`TL`, `Aug` |      [Official](https:\u002F\u002Fgithub.com\u002Fhzhupku\u002FSemiSeg-AEL)      |\n  | [On model calibration for long-tailed object detection and instance segmentation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F14ad095ecc1c3e1b87f3c522836e9158-Paper.pdf) | NeurIPS | 2021 |             `LA`             |         [Official](https:\u002F\u002Fgithub.com\u002Ftydpan\u002FNorCal)         |\n  | [Label-imbalanced and group-sensitive classification under overparameterization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=UZm2IQhgIyB) | NeurIPS | 2021 |             `LA`             |                                                              |\n  | [Towards calibrated model for long-tailed visual recognition from prior perspective](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Ffile\u002F39ae2ed11b14a4ccb41d35e9d1ba5d11-Paper.pdf) | NeurIPS | 2021 |         `Aug`, `RL`          |     [Official](https:\u002F\u002Fgithub.com\u002FXuZhengzhuo\u002FPrior-LT)      |\n  | [Supercharging imbalanced data learning with energy-based contrastive representation transfer](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fb151ce4935a3c2807e1dd9963eda16d8-Paper.pdf) | NeurIPS | 2021 |      `Aug`, `TL`, `RL`       |         [Official](https:\u002F\u002Fgithub.com\u002FZidiXiu\u002FECRT)          |\n  | [VideoLT: Large-scale long-tailed video recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.02668.pdf) |  
ICCV   | 2021 |          `Sampling`          |       [Official](https:\u002F\u002Fgithub.com\u002F17Skye17\u002FVideoLT)        |\n  | [Exploring classification equilibrium in long-tailed object detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.07507.pdf) |  ICCV   | 2021 |       `Sampling`,`CSL`       |          [Official](https:\u002F\u002Fgithub.com\u002Ffcjian\u002FLOCE)          |\n  | [GistNet: a geometric structure transfer network for long-tailed recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.00131.pdf) |  ICCV   | 2021 |    `Sampling`,`TL`, `DC`     |                                                              |\n  | [FASA: Feature augmentation and sampling adaptation for long-tailed instance segmentation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.12867.pdf) |  ICCV   | 2021 |       `Sampling`,`CSL`       |                                                              |\n  | [ACE: Ally complementary experts for solving long-tailed recognition in one-shot](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.02385.pdf) |  ICCV   | 2021 |    `Sampling`,`Ensemble`     | [Official](https:\u002F\u002Fgithub.com\u002Fjrcai\u002FACE?utm_source=catalyzex.com) |\n  | [Influence-Balanced Loss for Imbalanced Visual Classification](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.02444.pdf) |  ICCV   | 2021 |            `CSL`             |        [Official](https:\u002F\u002Fgithub.com\u002Fpseulki\u002FIB-Loss)        |\n  | [Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.11279.pdf) |  ICCV   | 2021 |             `TL`             |         [Official](https:\u002F\u002Fgithub.com\u002FCVMI-Lab\u002FDARS)         |\n  | [Self supervision to distillation for long-tailed visual recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.04075.pdf) |  ICCV   | 2021 |             `TL`             |        
[Official](https:\u002F\u002Fgithub.com\u002FMCG-NJU\u002FSSD-LT)         |\n  | [Distilling virtual examples for long-tailed recognition](https:\u002F\u002Fcs.nju.edu.cn\u002Fwujx\u002Fpaper\u002FICCV2021_DiVE.pdf) |  ICCV   | 2021 |             `TL`             |                                                              |\n  | [MosaicOS: A simple and effective use of object-centric images for long-tailed object detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.08884.pdf) |  ICCV   | 2021 |             `TL`             |     [Official](https:\u002F\u002Fgithub.com\u002Fczhang0528\u002FMosaicOS\u002F)      |\n  | [Parametric contrastive learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.12028.pdf) |  ICCV   | 2021 |             `RL`             | [Official](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FParametric-Contrastive-Learning) |\n  | [Distributional robustness loss for long-tail learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.03066.pdf) |  ICCV   | 2021 |             `RL`             |       [Official](https:\u002F\u002Fgithub.com\u002Fdvirsamuel\u002FDRO-LT)       |\n  | [Learning of visual relations: The devil is in the tails](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.09668.pdf) |  ICCV   | 2021 |             `DT`             |                                                              |\n  | [Image-Level or Object-Level? 
A Tale of Two Resampling Strategies for Long-Tailed Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.05702.pdf) |  ICML   | 2021 |          `Sampling`          |          [Official](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FRIO)           |\n  | [Self-Damaging Contrastive Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.02990.pdf) |  ICML   | 2021 |          `TL`,`RL`           |       [Official](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FSDCLR)        |\n  | [Delving into deep imbalanced regression](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.09554.pdf) |  ICML   | 2021 |           `Other`            | [Official](https:\u002F\u002Fgithub.com\u002FYyzHarry\u002Fimbalanced-regression) |\n  | [Long-tailed multi-label visual recognition by collaborative training on uniform and re-balanced samplings](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FGuo_Long-Tailed_Multi-Label_Visual_Recognition_by_Collaborative_Training_on_Uniform_and_CVPR_2021_paper.pdf) |  CVPR   | 2021 |    `Sampling`,`Ensemble`     |                                                              |\n  | [Equalization loss v2: A new gradient balance approach for long-tailed object detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FTan_Equalization_Loss_v2_A_New_Gradient_Balance_Approach_for_Long-Tailed_CVPR_2021_paper.pdf) |  CVPR   | 2021 |            `CSL`             |       [Official](https:\u002F\u002Fgithub.com\u002Ftztztztztz\u002Feqlv2)        |\n  | [Seesaw loss for long-tailed instance segmentation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_Seesaw_Loss_for_Long-Tailed_Instance_Segmentation_CVPR_2021_paper.pdf) |  CVPR   | 2021 |            `CSL`             |    [Official](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection)     |\n  | [Adaptive class suppression loss for long-tail object 
detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_Adaptive_Class_Suppression_Loss_for_Long-Tail_Object_Detection_CVPR_2021_paper.pdf) |  CVPR   | 2021 |            `CSL`             |      [Official](https:\u002F\u002Fgithub.com\u002FCASIA-IVA-Lab\u002FACSL)       |\n  | [PML: Progressive margin loss for long-tailed age classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FDeng_PML_Progressive_Margin_Loss_for_Long-Tailed_Age_Classification_CVPR_2021_paper.pdf) |  CVPR   | 2021 |            `CSL`             |                                                              |\n  | [Disentangling label distribution for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FHong_Disentangling_Label_Distribution_for_Long-Tailed_Visual_Recognition_CVPR_2021_paper.pdf) |  CVPR   | 2021 |          `CSL`,`LA`          |       [Official](https:\u002F\u002Fgithub.com\u002Fhyperconnect\u002FLADE)       |\n  | [Adversarial robustness under long-tailed distribution](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWu_Adversarial_Robustness_Under_Long-Tailed_Distribution_CVPR_2021_paper.pdf) |  CVPR   | 2021 |       `CSL`,`LA`,`CD`        | [Official](https:\u002F\u002Fgithub.com\u002Fwutong16\u002FAdversarial_Long-Tail) |\n  | [Distribution alignment: A unified framework for long-tail visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Distribution_Alignment_A_Unified_Framework_for_Long-Tail_Visual_Recognition_CVPR_2021_paper.pdf) |  CVPR   | 2021 |       `CSL`,`LA`,`DT`        | [Official](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FDisAlign) |\n  | [Improving calibration for long-tailed 
recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhong_Improving_Calibration_for_Long-Tailed_Recognition_CVPR_2021_paper.pdf) |  CVPR   | 2021 |       `CSL`,`Aug`,`DT`       |     [Official](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMiSLAS)     |\n  | [CReST: A class-rebalancing self-training framework for imbalanced semi-supervised learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWei_CReST_A_Class-Rebalancing_Self-Training_Framework_for_Imbalanced_Semi-Supervised_Learning_CVPR_2021_paper.pdf) |  CVPR   | 2021 |             `TL`             |     [Official](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fcrest)     |\n  | [Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FChangpinyo_Conceptual_12M_Pushing_Web-Scale_Image-Text_Pre-Training_To_Recognize_Long-Tail_Visual_CVPR_2021_paper.pdf) |  CVPR   | 2021 |             `TL`             | [Official](https:\u002F\u002Fgithub.com\u002Fgoogle-research-datasets\u002Fconceptual-12m) |\n  | [RSG: A simple but effective module for learning imbalanced datasets](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fsupplemental\u002FWang_RSG_A_Simple_CVPR_2021_supplemental.pdf) |  CVPR   | 2021 |          `TL`,`Aug`          |        [Official](https:\u002F\u002Fgithub.com\u002FJianf-Wang\u002FRSG)         |\n  | [MetaSAug: Meta semantic augmentation for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLi_MetaSAug_Meta_Semantic_Augmentation_for_Long-Tailed_Visual_Recognition_CVPR_2021_paper.pdf) |  CVPR   | 2021 |            `Aug`             |        [Official](https:\u002F\u002Fgithub.com\u002FBIT-DA\u002FMetaSAug)        |\n  | [Contrastive learning based hybrid networks for long-tailed image 
classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWang_Contrastive_Learning_Based_Hybrid_Networks_for_Long-Tailed_Image_Classification_CVPR_2021_paper.pdf) |  CVPR   | 2021 |             `RL`             |                                                              |\n  | [Unsupervised discovery of the long-tail in instance segmentation using hierarchical self-supervision](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FWeng_Unsupervised_Discovery_of_the_Long-Tail_in_Instance_Segmentation_Using_Hierarchical_CVPR_2021_paper.pdf) |  CVPR   | 2021 |             `RL`             |                                                              |\n  | [Long-tail learning via logit adjustment](https:\u002F\u002Fopenreview.net\u002Fpdf?id=37nvvqkCo5) |  ICLR   | 2021 |             `LA`             | [Official](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fgoogle-research\u002Ftree\u002Fmaster\u002Flogit_adjustment) |\n  | [Long-tailed recognition by routing diverse distribution-aware experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=D9I3drBz4UC) |  ICLR   | 2021 |       `TL`,`Ensemble`        | [Official](https:\u002F\u002Fgithub.com\u002Ffrank-xwang\u002FRIDE-LongTailRecognition) |\n  | [Exploring balanced feature spaces for representation learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=OqtLIabPTit) |  ICLR   | 2021 |          `RL`,`DT`           |                                                              |\n\n  ### 2020\n\n  | Title                                                        |  Venue  | Year |              Type               |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :-----------------------------: | :----------------------------------------------------------: |\n  | [Balanced meta-softmax for long-taield visual 
recognition](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F2ba61cc3a8f44143e1f2f13b2b729ab3-Paper.pdf) | NeurIPS | 2020 |        `Sampling`,`CSL`         | [Official](https:\u002F\u002Fgithub.com\u002Fjiawei-ren\u002FBalancedMetaSoftmax) |\n  | [Posterior recalibration for imbalanced datasets](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F5ca359ab1e9e3b9c478459944a2d9ca5-Paper.pdf) | NeurIPS | 2020 |              `LA`               |        [Official](https:\u002F\u002Fgithub.com\u002FGT-RIPL\u002FUNO-IC)         |\n  | [Long-tailed classification by keeping the good and removing the bad momentum causal effect](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002F1091660f3dff84fd648efe31391c5524-Paper.pdf) | NeurIPS | 2020 |            `LA`,`CD`            | [Official](https:\u002F\u002Fgithub.com\u002FKaihuaTang\u002FLong-Tailed-Recognition.pytorch) |\n  | [Rethinking the value of labels for improving classimbalanced learning](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Ffile\u002Fe025b6279c1b88d3ec0eca6fcb6e6280-Paper.pdf) | NeurIPS | 2020 |            `TL`,`RL`            | [Official](https:\u002F\u002Fgithub.com\u002FYyzHarry\u002Fimbalanced-semi-self) |\n  | [The devil is in classification: A simple framework for long-tail instance segmentation](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123590715.pdf) |  ECCV   | 2020 |   `Sampling`,`DT`,`Ensemble`    |        [Official](https:\u002F\u002Fgithub.com\u002Ftwangnh\u002FSimCal)         |\n  | [Imbalanced continual learning with partitioning reservoir sampling](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123580409.pdf) |  ECCV   | 2020 |           `Sampling`            |          [Official](https:\u002F\u002Fgithub.com\u002Fcdjkim\u002FPRS)           |\n  | [Distribution-balanced loss for multi-label 
classification in long-tailed datasets](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123490154.pdf) |  ECCV   | 2020 |              `CSL`              | [Official](https:\u002F\u002Fgithub.com\u002Fwutong16\u002FDistributionBalancedLoss) |\n  | [Feature space augmentation for long-tailed data](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123740681.pdf) |  ECCV   | 2020 |         `TL`,`Aug`,`DT`         |                                                              |\n  | [Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123500239.pdf) |  ECCV   | 2020 |         `TL`,`Ensemble`         |        [Official](https:\u002F\u002Fgithub.com\u002Fxiangly55\u002FLFME)         |\n  | [Solving long-tailed recognition with deep realistic taxonomic classifier](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123530171.pdf) |  ECCV   | 2020 |              `CD`               |       [Official](https:\u002F\u002Fgithub.com\u002Fgina9726\u002FDeep-RTC)       |\n  | [Learning to segment the tail](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FHu_Learning_to_Segment_the_Tail_CVPR_2020_paper.pdf) |  CVPR   | 2020 |         `Sampling`,`TL`         |     [Official](https:\u002F\u002Fgithub.com\u002FJoyHuYY1412\u002FLST_LVIS)      |\n  | [BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhou_BBN_Bilateral-Branch_Network_With_Cumulative_Learning_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf) |  CVPR   | 2020 |      `Sampling`,`Ensemble`      |      [Official](https:\u002F\u002Fgithub.com\u002FMegvii-Nanjing\u002FBBN)       |\n  | [Overcoming classifier 
imbalance for long-tail object detection with balanced group softmax](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLi_Overcoming_Classifier_Imbalance_for_Long-Tail_Object_Detection_With_Balanced_Group_CVPR_2020_paper.pdf) |  CVPR   | 2020 |      `Sampling`,`Ensemble`      | [Official](https:\u002F\u002Fgithub.com\u002FFishYuLi\u002FBalancedGroupSoftmax) |\n  | [Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FJamal_Rethinking_Class-Balanced_Methods_for_Long-Tailed_Visual_Recognition_From_a_Domain_CVPR_2020_paper.pdf) |  CVPR   | 2020 |              `CSL`              |   [Official](https:\u002F\u002Fgithub.com\u002Fabdullahjamal\u002FLongtail_DA)   |\n  | [Equalization loss for long-tailed object recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FTan_Equalization_Loss_for_Long-Tailed_Object_Recognition_CVPR_2020_paper.pdf) |  CVPR   | 2020 |              `CSL`              |       [Official](https:\u002F\u002Fgithub.com\u002Ftztztztztz\u002Feqlv2)        |\n  | [Domain balancing: Face recognition on long-tailed domains](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FCao_Domain_Balancing_Face_Recognition_on_Long-Tailed_Domains_CVPR_2020_paper.pdf) |  CVPR   | 2020 |              `CSL`              |                                                              |\n  | [M2m: Imbalanced classification via major-to-minor translation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FKim_M2m_Imbalanced_Classification_via_Major-to-Minor_Translation_CVPR_2020_paper.pdf) |  CVPR   | 2020 |           `TL`,`Aug`            |          [Official](https:\u002F\u002Fgithub.com\u002Falinlab\u002FM2m)          |\n  | [Deep representation learning on long-tailed data: A learnable embedding augmentation 
perspective](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Representation_Learning_on_Long-Tailed_Data_A_Learnable_Embedding_Augmentation_CVPR_2020_paper.pdf) |  CVPR   | 2020 |         `TL`,`Aug`,`RL`         |                                                              |\n  | [Inflated episodic memory with region self-attention for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhu_Inflated_Episodic_Memory_With_Region_Self-Attention_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf) |  CVPR   | 2020 |              `RL`               |                                                              |\n  | [Decoupling representation and classifier for long-tailed recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=r1gRTCVFvB) |  ICLR   | 2020 | `Sampling`,`CSL`,`RL`,`CD`,`DT` | [Official](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fclassifier-balancing) |\n\n\n  ### 2019\n\n  | Title                                                        |  Venue  | Year |    Type    |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------: | :----------------------------------------------------------: |\n  | [Meta-weight-net: Learning an explicit mapping for sample weighting](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002Fe58cc5ca94270acaceed13bc82dfedf7-Paper.pdf) | NeurIPS | 2019 |   `CSL`    |  [Official](https:\u002F\u002Fgithub.com\u002Fxjtushujun\u002Fmeta-weight-net)   |\n  | [Learning imbalanced datasets with label-distribution-aware margin loss](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F621461af90cadfdaf0e8d4cc25129f91-Paper.pdf) | NeurIPS | 2019 |   `CSL`    |        [Official](https:\u002F\u002Fgithub.com\u002Fkaidic\u002FLDAM-DRW)        |\n  | [Dynamic curriculum learning for 
imbalanced data classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Dynamic_Curriculum_Learning_for_Imbalanced_Data_Classification_ICCV_2019_paper.pdf) |  ICCV   | 2019 | `Sampling` |                                                              |\n  | [Class-balanced loss based on effective number of samples](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FCui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf) |  CVPR   | 2019 |   `CSL`    | [Official](https:\u002F\u002Fgithub.com\u002Frichardaecn\u002Fclass-balanced-loss) |\n  | [Striking the right balance with uncertainty](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKhan_Striking_the_Right_Balance_With_Uncertainty_CVPR_2019_paper.pdf) |  CVPR   | 2019 |   `CSL`    |                                                              |\n  | [Feature transfer learning for face recognition with under-represented data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FYin_Feature_Transfer_Learning_for_Face_Recognition_With_Under-Represented_Data_CVPR_2019_paper.pdf) |  CVPR   | 2019 | `TL`,`Aug` |                                                              |\n  | [Unequal-training for deep face recognition with long-tailed noisy data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhong_Unequal-Training_for_Deep_Face_Recognition_With_Long-Tailed_Noisy_Data_CVPR_2019_paper.pdf) |  CVPR   | 2019 |    `RL`    | [Official](https:\u002F\u002Fgithub.com\u002Fzhongyy\u002FUnequal-Training-for-Deep-Face-Recognition-with-Long-Tailed-Noisy-Data) |\n  | [Large-scale long-tailed recognition in an open world](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Large-Scale_Long-Tailed_Recognition_in_an_Open_World_CVPR_2019_paper.pdf) |  CVPR   | 2019 |    `RL`    | 
[Official](https:\u002F\u002Fgithub.com\u002Fzhmiao\u002FOpenLongTailRecognition-OLTR) |\n\n  ### 2018\n\n  | Title                                                        | Venue | Year | Type |                             Code                             |\n  | :----------------------------------------------------------- | :---: | :--: | :--: | :----------------------------------------------------------: |\n  | [Large scale fine-grained categorization and domain-specific transfer learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCui_Large_Scale_Fine-Grained_CVPR_2018_paper.pdf) | CVPR  | 2018 | `TL` | [Official](https:\u002F\u002Fgithub.com\u002Frichardaecn\u002Fcvpr18-inaturalist-transfer) |\n\n  ### 2017\n\n  | Title                                                        |  Venue  | Year | Type  | Code |\n  | :----------------------------------------------------------- | :-----: | :--: | :---: | :--: |\n  | [Learning to model the tail](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2017\u002Ffile\u002F147ebe637038ca50a1265abac8dea181-Paper.pdf) | NeurIPS | 2017 | `CSL` |      |\n  | [Focal loss for dense object detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLin_Focal_Loss_for_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `CSL` |      |\n  | [Range loss for deep face recognition with long-tailed training data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Range_Loss_for_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `RL`  |      |\n  | [Class rectification hard mining for imbalanced deep learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FDong_Class_Rectification_Hard_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `RL`  |      |\n\n  ### 2016\n\n  | Title                                                        | Venue | Year |      Type       | Code |\n  | 
:----------------------------------------------------------- | :---: | :--: | :-------------: | :--: |\n  | [Learning deep representation for imbalanced classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FHuang_Learning_Deep_Representation_CVPR_2016_paper.pdf) | CVPR  | 2016 | `Sampling`,`RL` |      |\n  | [Factors in finetuning deep model for object detection with long-tail distribution](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FOuyang_Factors_in_Finetuning_CVPR_2016_paper.pdf) | CVPR  | 2016 |   `CSL`,`RL`    |      |\n\n  ## 3. Benchmark Datasets\n\n  | Dataset          |      Long-tailed Task      | # Class | # Training data | # Test data |\n  | :--------------- | :------------------------: | :-----: | :-------------: | :---------: |\n  | ImageNet-LT      |       Classification       |  1,000  |     115,846     |   50,000    |\n  | CIFAR100-LT      |       Classification       |   100   |     50,000      |   10,000    |\n  | Places-LT        |       Classification       |   365   |     62,500      |   36,500    |\n  | iNaturalist 2018 |       Classification       |  8,142  |     437,513     |   24,426    |\n  | LVIS v0.5        | Detection and Segmentation |  1,230  |     57,000      |   20,000    |\n  | LVIS v1          | Detection and Segmentation |  1,203  |     100,000     |   19,800    |\n  | VOC-LT           | Multi-label Classification |   20    |      1,142      |    4,952    |\n  | COCO-LT          | Multi-label Classification |   80    |      1,909      |    5,000    |\n  | VideoLT          |    Video Classification    |  1,004  |     179,352     |   25,622    |\n\n  ## 4. Our codebase\n\n  * To use our codebase, please install requirements: \n    ```\n    pip install -r requirements.txt\n    ```\n  * Hardware requirements: 4 GPUs with >= 23G GPU RAM are recommended.  
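A minimal pre-flight sketch (our suggestion, not part of the official codebase): before launching the multi-GPU training commands below, you can verify that enough GPU memory is visible. The `enough_gpus` helper and its thresholds are our own illustration, assuming the 4-GPU / 23G recommendation above.

```python
# Check that enough GPU memory is visible before launching training.
def enough_gpus(mem_gb_per_gpu, min_gpus=4, min_mem_gb=23):
    """mem_gb_per_gpu: total memory (in GB) of each visible device."""
    return sum(m >= min_mem_gb for m in mem_gb_per_gpu) >= min_gpus

try:
    import torch  # the codebase is PyTorch-based
    mems = ([torch.cuda.get_device_properties(i).total_memory / 1024**3
             for i in range(torch.cuda.device_count())]
            if torch.cuda.is_available() else [])
except ImportError:  # torch not installed yet
    mems = []

print("Ready for 4-GPU training:", enough_gpus(mems))
```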
\n  * ImageNet-LT dataset: please download the ImageNet-1K dataset and put it in the .\u002Fdata directory.\n    ```\n    data\n    └──ImageNet\n        ├── train\n        └── val\n    ```\n  * Softmax:\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fce.yaml  --exp_name imagenet\u002FCE  --gpu 0,1,2,3 \n    ```\n  * Weighted Softmax:\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fweighted_ce.yaml  --exp_name imagenet\u002Fweighted_ce  --gpu 0,1,2,3\n    ```\n  * ESQL (Equalization loss):\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fseql.yaml  --exp_name imagenet\u002Fseql  --gpu 0,1,2,3\n    ```\n  * Balanced Softmax:\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fbalanced_softmax.yaml  --exp_name imagenet\u002FBS  --gpu 0,1,2,3\n    ```\n  * LADE:\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Flade.yaml  --exp_name imagenet\u002FLADE  --gpu 0,1,2,3\n    ```\n  * De-confound (Causal):\n    ```\n    cd .\u002FMain-codebase \n    Training: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fcausal.yaml  --exp_name imagenet\u002Fcausal --remine_lambda 0.1 --alpha 0.005 --gpu 0,1,2,3\n    ```\n  * Decouple (IB-CRT):\n    ```\n    cd .\u002FMain-codebase \n    Training stage 1: python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fce.yaml  --exp_name imagenet\u002FCE  --gpu 0,1,2,3 \n    Training stage 2: python3 main.py --cfg .\u002Fconfig\u002FImageNet_LT\u002Fcls_crt.yaml --model_dir exp_results\u002Fimagenet\u002FCE\u002Ffinal_model_checkpoint.pth  --gpu 0,1,2,3 \n    ```\n  * MiSLAS:\n    ```\n    cd .\u002FMiSLAS-codebase\n    Training stage 1: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage1.py --cfg 
config\u002Fimagenet\u002Fimagenet_resnext50_stage1_mixup.yaml\n    Training stage 2: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage2.py --cfg config\u002Fimagenet\u002Fimagenet_resnext50_stage2_mislas.yaml resume checkpoint_path\n    Evaluation: CUDA_VISIBLE_DEVICES=0  python3 eval.py --cfg .\u002Fconfig\u002Fimagenet\u002Fimagenet_resnext50_stage2_mislas.yaml  resume checkpoint_path_stage2\n    ```\n  * RSG:\n    ```\n    cd .\u002FRSG-codebase\n    Training: python3 imagenet_lt_train.py \n    Evaluation: python3 imagenet_lt_test.py \n    ```\n  * ResLT:\n    ```\n    cd .\u002FResLT-codebase\n    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh\u002FX50.sh\n    Evaluation: CUDA_VISIBLE_DEVICES=0 bash sh\u002FX50_eval.sh\n    # The test performance can be found in the log file.\n    ```\n  * PaCo:\n    ```\n    cd .\u002FPaCo-codebase\n    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh\u002FImageNetLT_train_X50.sh\n    Evaluation: CUDA_VISIBLE_DEVICES=0 bash sh\u002FImageNetLT_eval_X50.sh\n    # The test performance can be found in the log file.\n    ```\n  * LDAM:\n    ```\n    cd .\u002FEnsemble-codebase \n    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_ldam.json\n    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n  * RIDE:\n    ```\n    cd .\u002FEnsemble-codebase \n    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_ride.json\n    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n  * SADE:\n    ```\n    cd .\u002FEnsemble-codebase \n    Training: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_sade.json\n    Evaluation: CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n\n  ## 5. 
Empirical Studies\n\n  ### (1) Long-tailed benchmarking performance\n\n  * We evaluate several state-of-the-art methods on ImageNet-LT to see to what extent they handle class imbalance, via new evaluation metrics, i.e., UA (upper bound accuracy) and RA (relative accuracy). We categorize these methods based on class re-balancing (CR), information augmentation (IA) and module improvement (MI). \n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_5e127bd8ee68.png\" width=900>\n  \u003C\u002Fp>\n  \n\n  * Almost all long-tailed methods perform better than the Softmax baseline in terms of accuracy, which demonstrates the effectiveness of long-tailed learning. \n  * Training with 200 epochs leads to better performance for most long-tailed methods, since sufficient training enables deep models to fit the data better and learn better image representations.\n  * In addition to accuracy, we also evaluate long-tailed methods based on UA and RA. For the methods that have higher UA, the performance gain comes not only from the alleviation of class imbalance, but also from other factors, like data augmentation or better network architectures. Therefore, simply using accuracy for evaluation is not accurate enough, while our proposed RA metric provides a good complement, since it alleviates the influence of factors apart from class imbalance. \n  * For example, MiSLAS, based on data mixup, has higher accuracy than Balanced Softmax under 90 training epochs, but it also has higher UA. 
As a result, the relative accuracy of MiSLAS is lower than that of Balanced Softmax, which means that Balanced Softmax alleviates class imbalance better than MiSLAS under 90 training epochs.\n  * Although some recent high-accuracy methods have lower RA, the overall development trend of long-tailed learning is still positive, as shown in the figure below.\n\n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_2f21959c8e4a.png\" width=900>\n  \u003C\u002Fp>\n  \n\n  * The current state-of-the-art long-tailed method in terms of both accuracy and RA is SADE (an ensemble-based method). \n\n  ### (2) More discussions on cost-sensitive losses\n\n  * We further evaluate the performance of different cost-sensitive learning losses based on the decoupled training scheme.\n  * Decoupled training, compared to joint training, can further improve the overall performance of most cost-sensitive learning methods, apart from balanced softmax (BS).\n  * Although BS outperforms other cost-sensitive losses under one-stage training, they perform comparably under decoupled training. This implies that although these cost-sensitive losses perform differently under joint training, they essentially learn feature representations of similar quality. \n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_29fd15ef3f2c.png\" width=500>\n  \u003C\u002Fp>\n  \n\n\n  ## 6. Citation\n\n  If this repository is helpful to you, please cite our survey.\n\n  ```\n  @article{zhang2023deep,\n        title={Deep long-tailed learning: A survey},\n        author={Zhang, Yifan and Kang, Bingyi and Hooi, Bryan and Yan, Shuicheng and Feng, Jiashi},\n        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n        year={2023},\n        publisher={IEEE}\n  }\n  ```\n\n  ## 7. 
Other Resources\n\n  - [Papers With Code: Long-tailed Learning](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Flong-tail-learning)\n  - [zzw-zwzhang\u002FAwesome-of-Long-Tailed-Recognition](https:\u002F\u002Fgithub.com\u002Fzzw-zwzhang\u002FAwesome-of-Long-Tailed-Recognition)\n  - [SADE\u002FTest-Agnostic Long-Tailed Recognition](https:\u002F\u002Fgithub.com\u002FVanint\u002FSADE-AgnosticLT)\n \n","# Awesome Long-Tailed Learning (TPAMI 2023)\n\n  [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n  [![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com)\n\n  We released *[Deep Long-Tailed Learning: A Survey](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04596.pdf)* and **our codebase** to the community. In this survey, we review recent advances in long-tailed learning based on deep neural networks. Existing long-tailed learning studies can be grouped into three main categories (i.e., class re-balancing, information augmentation and module improvement), which can be further classified into nine sub-categories (as shown in the figure below). Moreover, we empirically analyze several state-of-the-art methods by evaluating to what extent they address the class imbalance problem. We conclude the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research.\n\n  After completing this survey, we decided to release our long-tailed learning resources and codebase, in the hope of advancing the community in this field. If you have any questions or suggestions, please feel free to contact us.\n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_89843bf54774.png\" width=1000>\n  \u003C\u002Fp>\n  \n\n  ## 1. Types of Long-tailed Learning\n\n  | Symbol | `Sampling`  |          `CSL`          |       `LA`       |       `TL`        |       `Aug`       |\n  | :----- | :---------: | :---------------------: | :--------------: | :---------------: | :---------------: |\n  | Type   | Re-sampling | Class-sensitive learning | Logit adjustment | Transfer learning | Data augmentation |\n\n  | Symbol |          `RL`           |       `CD`        |        `DT`        |    `Ensemble`     |   `other`   |\n  | :----- | :---------------------: | :---------------: | :----------------: | :---------------: | :---------: |\n  | Type   | Representation learning | Classifier design | Decoupled training | Ensemble learning | Other types |\n\n  ## 2. 
Top-tier Conference Papers (Updated June 2025)\n\n\n\n  ### 2025\n\n| Title                  | Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: |\n  | [Supervised exploratory learning for long-tailed visual recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FJian_Supervised_Exploratory_Learning_for_Long-Tailed_Visual_Recognition_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Sampling`,`RL`  |       | \n  | [You are your own best teacher: Achieving centralized-level performance in federated learning under heterogeneous and long-tailed data](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FYan_You_Are_Your_Own_Best_Teacher_Achieving_Centralized-level_Performance_in_ICCV_2025_paper.pdf) | ICCV | 2025 |  `LA`,`TL`,`RL`  |    [Official](https:\u002F\u002Fgithub.com\u002Fshanss132\u002FFedYoYo)        | \n  | [Boosting class representation via semantically related instances for robust long-tailed learning with noisy labels](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLi_Boosting_Class_Representation_via_Semantically_Related_Instances_for_Robust_Long-Tailed_ICCV_2025_paper.pdf) | ICCV | 2025 |  `TL`,`Ensemble`,`other` |    [Official](https:\u002F\u002Fgithub.com\u002Fyhliml\u002FIBC)        | \n  | [Long-tailed classification with multi-granularity semantics](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLiu_Long-Tailed_Classification_with_Multi-Granularity_Semantics_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Aug`,`RL`  |        | \n  | [AMD: Adaptive momentum and decoupled contrastive learning framework for robust long-tail trajectory prediction](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FRao_AMD_Adaptive_Momentum_and_Decoupled_Contrastive_Learning_Framework_for_Robust_ICCV_2025_paper.pdf) | ICCV | 2025 |  `Aug`,`RL`  |       | \n  | [Generative active learning for long-tail trajectory prediction via controllable diffusion model](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FPark_Generative_Active_Learning_for_Long-tail_Trajectory_Prediction_via_Controllable_Diffusion_ICCV_2025_paper.pdf) | ICCV | 
2025 |  `Aug`,`other`  |        |\n  | [Category-specific selective feature enhancement for long-tailed multi-label image classification](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FDu_Category-Specific_Selective_Feature_Enhancement_for_Long-Tailed_Multi-Label_Image_Classification_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`  |        | \n  | [Overcoming dual drift for continual long-tailed visual question answering](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FZhang_Overcoming_Dual_Drift_for_Continual_Long-Tailed_Visual_Question_Answering_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`,`other`  |        | \n  | [A tiny change, a giant leap: Long-tailed class-incremental learning via geometric prototype alignment](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FLai_A_Tiny_Change_A_Giant_Leap_Long-Tailed_Class-Incremental_Learning_via_ICCV_2025_paper.pdf) | ICCV | 2025 |  `RL`,`other`  |    [Official](https:\u002F\u002Fgithub.com\u002Flaixinyi023\u002FGeometric-Prototype-Alignment)        | \n  | [Toward long-tailed online anomaly detection through class-agnostic concepts](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2025\u002Fpapers\u002FYang_Toward_Long-Tailed_Online_Anomaly_Detection_through_Class-Agnostic_Concepts_ICCV_2025_paper.pdf) | ICCV | 2025 |  `other`  |    [Official](https:\u002F\u002Fzenodo.org\u002Frecords\u002F16283853)        | \n  | [Rethinking the bias of foundation model under long-tailed distribution](https:\u002F\u002Fopenreview.net\u002Fattachment?id=jSoNlHD9qA&name=pdf) | ICML | 2025 |  `LA`  |        | \n  | [Advancing personalized learning with neural collapse for long-tailed challenges](https:\u002F\u002Fopenreview.net\u002Fpdf?id=W7phL2sNif) | ICML | 2025 |  `RL` |    [Official](https:\u002F\u002Fgithub.com\u002Fllm4edu\u002FNCAL_ICML2025)    | \n  | [Focal-SAM: Focal sharpness-aware minimization for long-tailed classification](https:\u002F\u002Fopenreview.net\u002Fattachment?id=lCk4PZto8T&name=pdf) | ICML | 2025 |  `RL`  |        | \n  | [A square peg in a round hole: Meta-expert for long-tailed semi-supervised learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=h0ZeiDRN8A) | ICML | 2025 |  `RL`,`Ensemble` |        | \n  | [Balancing model efficiency and performance: An adaptive pruner for long-tailed data](https:\u002F\u002Fopenreview.net\u002Fpdf?id=1d1ssNedLv) | ICML | 2025 |  `other` |    
[Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FLT-VOTE)    | \n  | [TailedCore: Few-shot sampling for unsupervised long-tail noisy anomaly detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FJung_TailedCore_Few-Shot_Sampling_for_Unsupervised_Long-Tail_Noisy_Anomaly_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `Sampling` |    [Official](https:\u002F\u002Fgithub.com\u002Fjungyg\u002FTailedCore)    | \n  | [Fractal calibration for long-tailed object detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FAlexandridis_Fractal_Calibration_for_Long-tailed_Object_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `LA`,`other` |    [Official](https:\u002F\u002Fgithub.com\u002Fkostas1515\u002FFRACAL)    | \n  | [SimLTD: Simple supervised and semi-supervised long-tailed object detection](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FTran_SimLTD_Simple_Supervised_and_Semi-Supervised_Long-Tailed_Object_Detection_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL` |       | \n  | [Learning from neighbors: Category extrapolation for long-tail learning](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FZhao_Learning_from_Neighbors_Category_Extrapolation_for_Long-Tail_Learning_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL`,`Aug`,`RL` |       | \n  | [Distilling long-tailed datasets](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FZhao_Distilling_Long-tailed_Datasets_CVPR_2025_paper.pdf) | CVPR | 2025 |   `TL`,`other` |    [Official](https:\u002F\u002Fgithub.com\u002Fichbill\u002FLTDD)    | \n  | [Search and detect: Training-free long tail object detection via web-image retrieval](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FSidhu_Search_and_Detect_Training-Free_Long_Tail_Object_Detection_via_Web-Image_CVPR_2025_paper.pdf) | CVPR | 2025 |   `other` |    [Official](https:\u002F\u002Fgithub.com\u002FMankeerat\u002FSearchDet)    | \n  | 
[TAET: Two-stage adversarial equalization training on long-tailed distributions](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FYu-Hang_TAET_Two-Stage_Adversarial_Equalization_Training_on_Long-Tailed__Distributions_CVPR_2025_paper.pdf) | CVPR | 2025 |   `other` |    [Official](https:\u002F\u002Fgithub.com\u002FBuhuiOK\u002FTAET-Two-Stage-Adversarial-Equalization-Training-on-Long-Tailed-Distributions)    | \n  | [Pursuing better decision boundaries for long-tailed object detection via category information amount](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LW55JrLYPg) | ICLR | 2025 | `CSL` |      | \n  | [Rethinking classifier re-training in long-tailed recognition: Label over-smoothing can balance](https:\u002F\u002Fopenreview.net\u002Fpdf?id=OeKp3AdiVO) | ICLR | 2025 | `LA`,`DT` |      | \n  | [Long-tailed adversarial training with self-distillation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=vM94dZiqx4) | ICLR | 2025 | `TL`,`other` |      | \n  | [ConMix: Contrastive mixup at representation level for long-tailed deep clustering](https:\u002F\u002Fopenreview.net\u002Fpdf?id=3lH8WT0fhu) | ICLR | 2025 | `Aug`,`RL` |    [Official](https:\u002F\u002Fgithub.com\u002FLZX-001\u002FConMix)    | \n  | [Geometry of long-tailed representation learning: Rebalancing features for skewed distributions](https:\u002F\u002Fopenreview.net\u002Fpdf?id=GySIAKEwtZ) | ICLR | 2025 | `RL`  |      |\n\n### 2024\n\n| Title                                                        | Venue  | Year |       Type       |                             Code                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------------: | :----------------------------------------------------------: |\n  | [Taming the long tail in human mobility prediction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=wT2TIfHKp8) | NeurIPS | 2024 | `Sampling`,`LA` |      | \n  | [Long-tailed object detection pretraining: Dynamic rebalancing contrastive learning with dual reconstruction](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mGz3Jux9wS) | NeurIPS | 2024 | `Sampling`,`RL` |      | \n  | [AUCSeg: AUC-oriented pixel-level long-tail semantic segmentation](https:\u002F\u002Fopenreview.net\u002Fpdf\u002F38fa941c480de3259c3508aaf0c968eed971b269.pdf) | NeurIPS | 2024 | `Sampling`,`other` |   [Official](https:\u002F\u002Fgithub.com\u002Fboyuh\u002FAUCSeg)       | \n  | [Continuous contrastive learning for long-tailed semi-supervised recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=PaqJ71zf1M) | NeurIPS | 2024 | `CSL`,`LA`,`RL` |  
 [Official](https:\u002F\u002Fgithub.com\u002Fzhouzihao11\u002FCCL)       | \n  | [Long-tailed out-of-distribution detection via normalized outlier distribution adaptation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=cesWi7mMLY) | NeurIPS | 2024 | `LA`  |   [Official](https:\u002F\u002Fgithub.com\u002Fmala-lab\u002FAdaptOD)       | \n  | [LLM-ESR: Large language models enhancement for long-tailed sequential recommendation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=xojbzSYIVS) | NeurIPS | 2024 | `TL`,`Ensemble` |   [Official](https:\u002F\u002Fgithub.com\u002FApplied-Machine-Learning-Lab\u002FLLM-ESR)      | \n  | [DiffuLT: Diffusion model for long-tail recognition without external knowledge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Kcsj9FGnKR) | NeurIPS | 2024 | `Aug` |      | \n  | [LLM-AutoDA: Large language model-driven automatic data augmentation for long-tailed problems](https:\u002F\u002Fopenreview.net\u002Fpdf?id=VpuOuZOVhP) | NeurIPS | 2024 | `Aug` |     [Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FLLM-LT-AUG)       | \n  | [Breaking long-tailed learning bottlenecks: A controllable paradigm with hypernetwork-generated diverse experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=WpPNVPAEyv) | NeurIPS | 2024 | `Ensemble` |    [Official](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002FPRL)    | \n  | [Once read is enough: Domain-specific pretraining-free language models with cluster-guided sparse experts for long-tail domain knowledge](https:\u002F\u002Fopenreview.net\u002Fpdf?id=manHbkpIW6) | NeurIPS | 2024 | `Ensemble` |        | \n  | [Improving visual prompt tuning by Gaussian neighborhood minimization for long-tailed visual recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7lMN6xoBjb) | NeurIPS | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002FKeke921\u002FGNM-PT)       | \n  | [Towards heterogeneous long-tailed learning: Benchmarking, metrics, and toolbox](https:\u002F\u002Fopenreview.net\u002Fpdf?id=plIuBfYpXj) | NeurIPS | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002FSSSKJ\u002FHeroLT)       |  \n  | [What makes CLIP more robust to long-tailed pre-training data? A controlled study for transferable insights](https:\u002F\u002Fopenreview.net\u002Fpdf?id=PcyioHOmjq) | NeurIPS | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002FCVMI-Lab\u002Fclip-beyond-tail)       |  \n  | [Flexible distribution alignment: Towards long-tailed semi-supervised learning with proper calibration](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F07132.pdf) | ECCV | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002Femasa\u002FADELLO-LTSSL)       |  \n  | 
[Long-tailed temporal action segmentation with group-wise temporal logit adjustment](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04389.pdf) | ECCV | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002Fpangzhan27\u002FGTLA)       |  \n  | [Distribution-aware robust learning from long-tailed data with noisy labels](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F02177.pdf) | ECCV | 2024 | `Aug`,`RL` |   [Official](https:\u002F\u002Fgithub.com\u002FJaesoonBaik1213\u002FDaSC)       |  \n  | [Distributionally robust loss for long-tailed multi-label image classification](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04926.pdf) | ECCV | 2024 | `RL` |   [Official](https:\u002F\u002Fgithub.com\u002FKunmonkey\u002FDR-Loss)       |  \n  | [Rectifying the regression bias in long-tailed object detection](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F04069.pdf) | ECCV | 2024 | `RL`,`CD` |       | \n  | [LTRL: Boosting long-tail recognition via reflective learning](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2024\u002Fpapers_ECCV\u002Fpapers\u002F08380.pdf) | ECCV | 2024 | `other` |   [Official](https:\u002F\u002Fgithub.com\u002Ffistyee\u002FLTRL)       |  \n  | [Learning label shift correction for test-agnostic long-tailed recognition](https:\u002F\u002Fopenreview.net\u002Fpdf?id=J3xYTh6xtL) | ICML | 2024 | `LA` |   [Official](https:\u002F\u002Fgithub.com\u002FStomach-ache\u002Flabel-shift-correction)      |  \n  | [Generative active learning for long-tailed instance segmentation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ofXRBPtol3) | ICML | 2024 | `Aug` |    |  \n  | [ELTA: An enhancer against long-tail for aesthetics-oriented models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=dhrNfAJAH6) | ICML | 2024 | `Aug` |   [Official](https:\u002F\u002Fgithub.com\u002Fwoshidandan\u002FLong-Tail-image-aesthetics-and-quality-assessment)       |  \n  | [Distribution alignment optimization through neural collapse for long-tailed classification](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hjwx3H6Vci) | ICML | 2024 | `RL` |    [Official](https:\u002F\u002Fgithub.com\u002FJintongGao\u002FDisA)      | \n  | [Long-tail learning with foundation model: Heavy fine-tuning hurts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ccSSKTz9LX) | ICML | 2024 | `CD` |    [Official](https:\u002F\u002Fgithub.com\u002Fshijxcs\u002FLIFT)      |  \n  | 
[SimPro：一个简单的概率框架，用于实现更真实的长尾半监督学习](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NbOlmrB59Z) | ICML | 2024 | `CD` |    [官方](https:\u002F\u002Fgithub.com\u002FLeapLabTHU\u002FSimPro)      | \n  | [在测试无关的长尾识别中利用层次化标签分布的变化](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ebt5BfRHcW) | ICML | 2024 | `集成` |    [官方](https:\u002F\u002Fgithub.com\u002Fscongl\u002FDirMixE)      |  \n  | [双拳一心：基于多目标优化策略融合的长尾学习](https:\u002F\u002Fopenreview.net\u002Fpdf?id=MEZydkOr3l) | ICML | 2024 | `其他` |    [官方](https:\u002F\u002Fgithub.com\u002FDataLab-atom\u002Ftorch-MOOSF)      |  \n  | [BEM：平衡且基于熵的混合方法，用于长尾半监督学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FZheng_BEM_Balanced_and_Entropy-based_Mix_for_Long-Tailed_Semi-Supervised_Learning_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`TL`,`CD` |  [官方](https:\u002F\u002Fgithub.com\u002FZhenghongwei0929\u002FCVPR2024-BEM)         |\n  | [DeiT-LT：蒸馏反击，用于长尾数据集上的视觉Transformer训练](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FRangwani_DeiT-LT_Distillation_Strikes_Back_for_Vision_Transformer_Training_on_Long-Tailed_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`TL`,`CD` |  [官方](https:\u002F\u002Fgithub.com\u002Fval-iisc\u002FDeiT-LT)         |\n  | [在长尾分布下重新审视对抗训练](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FYue_Revisiting_Adversarial_Training_Under_Long-Tailed_Distributions_CVPR_2024_paper.pdf) | CVPR | 2024 | `CSL`,`增广` |  [官方](https:\u002F\u002Fgithub.com\u002FNISPLab\u002FAT-BSL)         |\n  | [具有可学习类别名称的长尾异常检测](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FHo_Long-Tailed_Anomaly_Detection_with_Learnable_Class_Names_CVPR_2024_paper.pdf) | CVPR | 2024 | `TL`,`增广` |        [官方](https:\u002F\u002Fzenodo.org\u002Frecords\u002F10854201)    |  \n  | 
[LTGC：利用大语言模型生成的内容进行长尾识别](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FZhao_LTGC_Long-tail_Recognition_via_Leveraging_LLMs-driven_Generated_Content_CVPR_2024_paper.pdf) | CVPR | 2024 | `增广` |   [官方](https:\u002F\u002Fgithub.com\u002Fdialogueeeeee\u002FLTGC)       |  \n  | [深入探讨多目标跟踪中的轨迹长尾分布](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FChen_Delving_into_the_Trajectory_Long-tail_Distribution_for_Muti-object_Tracking_CVPR_2024_paper.pdf) | CVPR | 2024 | `增广`,`CD` |   [官方](https:\u002F\u002Fgithub.com\u002Fchen-si-jia\u002FTrajectory-Long-tail-Distribution-for-MOT)       |  \n  | [通过独立子原型构建进行长尾类别增量学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FWang_Long-Tail_Class_Incremental_Learning_via_Independent_Sub-prototype_Construction_CVPR_2024_paper.pdf) | CVPR | 2024 | `RL` |       |\n  | [具有定向校准的长尾扩散模型](https:\u002F\u002Fopenreview.net\u002Fattachment?id=NW2s5XXwXU&name=pdf) | ICLR | 2024 | `采样`,`CSL`,`TL` |       [官方](https:\u002F\u002Fgithub.com\u002FMediaBrain-SJTU\u002FOC_LT)       |\n  | [一石二鸟：重新思考深度长尾学习的数据增强](https:\u002F\u002Fopenreview.net\u002Fattachment?id=RzY9qQHUXy&name=pdf) | ICLR | 2024 | `增广` |       [官方](https:\u002F\u002Fgithub.com\u002Fpongkun\u002FCode-for-DODA)       |\n  | [FedLoGe：长尾数据下的本地与通用联邦学习联合](https:\u002F\u002Fopenreview.net\u002Fattachment?id=V3j5d0GQgH&name=pdf) | ICLR | 2024 | `RL` |       [官方](https:\u002F\u002Fgithub.com\u002FZackZikaiXiao\u002FFedLoGe)       |\n  | [拒绝学习与长尾学习相遇](https:\u002F\u002Fopenreview.net\u002Fattachment?id=ta26LtNq2r&name=pdf) | ICLR | 2024 | `CD` |      |\n  | [探索长尾识别问题中的权重平衡](https:\u002F\u002Fopenreview.net\u002Fattachment?id=JsnR0YO4Fq&name=pdf) | ICLR | 2024 | `DT` |       [官方](https:\u002F\u002Fgithub.com\u002FHN410\u002FExploring-Weight-Balancing-on-Long-Tailed-Recognition-Problem)       |\n  | [帕累托深度长尾识别：一种冲突回避解决方案](https:\u002F\u002Fopenreview.net\u002Fattachment?id=b66P1u0k15&name=pdf) | 
### 2023

| Title | Venue | Year | Type | Code |
| :--- | :-: | :-: | :-: | :-: |
| [How Re-sampling Helps for Long-Tail Learning?](https://proceedings.neurips.cc/paper_files/paper/2023/file/eeffa70bcbbd43f6bd067edebc6595e8-Paper-Conference.pdf) | NeurIPS | 2023 | `Sampling`,`Aug` | [Official](https://www.lamda.nju.edu.cn/code_CSA.ashx) |
| [Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer](https://proceedings.neurips.cc/paper_files/paper/2023/file/f4b8ddb9b1aa3cb11462d64a70b84db2-Paper-Conference.pdf) | NeurIPS | 2023 | `CSL` | [Official](https://github.com/ZackZikaiXiao/FedGraB) |
| [Enhancing Minority Classes by Mixing: An Adaptative Optimal Transport Approach for Long-tailed Classification](https://proceedings.neurips.cc/paper_files/paper/2023/file/bdabb5d4262bcfb6a1d529d690a6c82b-Paper-Conference.pdf) | NeurIPS | 2023 | `Aug` | [Official](https://github.com/JintongGao/Enhancing-Minority-Classes-by-Mixing) |
| [Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection](https://proceedings.neurips.cc/paper_files/paper/2023/file/f5fcd88d3deb97bb62559208cfa0ab62-Paper-Conference.pdf) | NeurIPS | 2023 | `RL` | [Official](https://github.com/MengLcool/RichSem) |
| [Generalized Test Utilities for Long-Tail Performance in Extreme Multi-label Classification](https://proceedings.neurips.cc/paper_files/paper/2023/file/46994b3d6dd0fd5fca5f780af6259db5-Paper-Conference.pdf) | NeurIPS | 2023 | `Other` | |
| [Label-Noise Learning with Intrinsically Long-Tailed Data](https://openaccess.thecvf.com/content/ICCV2023/papers/Lu_Label-Noise_Learning_with_Intrinsically_Long-Tailed_Data_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling` | [Official](https://github.com/Wakings/TABASCO) |
| [MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhao_MDCS_More_Diverse_Experts_with_Consistency_Self-distillation_for_Long-tailed_Recognition_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`TL`,`Ensemble` | [Official](https://github.com/fistyee/MDCS) |
| [Subclass-balancing Contrastive Learning for Long-tailed Recognition](https://openaccess.thecvf.com/content/ICCV2023/papers/Hou_Subclass-balancing_Contrastive_Learning_for_Long-tailed_Recognition_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`RL` | [Official](https://github.com/JackHck/SBCL) |
| [When Noisy Labels Meet Long Tail Dilemmas: A Representation Calibration Method](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_When_Noisy_Labels_Meet_Long_Tail_Dilemmas_A_Representation_Calibration_ICCV_2023_paper.pdf) | ICCV | 2023 | `Sampling`,`RL`,`DT` | |
| [AREA: Adaptive Reweighting via Effective Area for Long-Tailed Classification](https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_AREA_Adaptive_Reweighting_via_Effective_Area_for_Long-Tailed_Classification_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL` | [Official](https://github.com/xiaohua-chen/AREA) |
| [Reconciling Object-Level and Global-Level Objectives for Long-Tail Detection](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Reconciling_Object-Level_and_Global-Level_Objectives_for_Long-Tail_Detection_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL` | [Official](https://github.com/EricZsy/ROG) |
| [Local and Global Logit Adjustments for Long-Tailed Learning](https://openaccess.thecvf.com/content/ICCV2023/papers/Tao_Local_and_Global_Logit_Adjustments_for_Long-Tailed_Learning_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`,`LA`,`Ensemble` | |
| [Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Learning_in_Imperfect_Environment_Multi-Label_Classification_with_Long-Tailed_Distribution_and_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`,`TL` | [Official](https://github.com/wannature/COMIC) |
| [Global Balanced Experts for Federated Long-Tailed Learning](https://openaccess.thecvf.com/content/ICCV2023/papers/Zeng_Global_Balanced_Experts_for_Federated_Long-Tailed_Learning_ICCV_2023_paper.pdf) | ICCV | 2023 | `CSL`,`Ensemble` | [Official](https://github.com/Spinozaaa/Federated-Long-tailed-Learning) |
| [Boosting Long-tailed Object Detection via Step-wise Learning on Smooth-tail Data](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Boosting_Long-tailed_Object_Detection_via_Step-wise_Learning_on_Smooth-tail_Data_ICCV_2023_paper.pdf) | ICCV | 2023 | `Ensemble` | |
| [Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels](https://openreview.net/pdf?id=KqNX6VOqnJ) | ICML | 2023 | `CSL`,`RL` | [Official](https://github.com/bluecdm/Long-tailed-recognition) |
| [Large Language Models Struggle to Learn Long-Tail Knowledge](https://openreview.net/pdf?id=sfdKdeczaw) | ICML | 2023 | `Aug` | |
| [Feature Directions Matter: Long-Tailed Learning via Rotated Balanced Representation](https://openreview.net/pdf?id=dTgxiMW6wr0) | ICML | 2023 | `RL` | |
| [Wrapped Cauchy Distributed Angular Softmax for Long-Tailed Visual Recognition](https://proceedings.mlr.press/v202/han23a/han23a.pdf) | ICML | 2023 | `RL`,`CD` | [Official](https://github.com/boranhan/wcdas_code) |
| [Rethinking Image Super Resolution from Long-Tailed Distribution Learning Perspective](https://openaccess.thecvf.com/content/CVPR2023/papers/Gou_Rethinking_Image_Super_Resolution_From_Long-Tailed_Distribution_Learning_Perspective_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL` | |
| [Transfer Knowledge from Head to Tail: Uncertainty Calibration under Long-tailed Distribution](https://openaccess.thecvf.com/content/CVPR2023/papers/Chen_Transfer_Knowledge_From_Head_to_Tail_Uncertainty_Calibration_Under_Long-Tailed_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`TL` | [Official](https://github.com/JiahaoChen1/Calibration) |
| [Towards Realistic Long-Tailed Semi-Supervised Learning: Consistency Is All You Need](https://openaccess.thecvf.com/content/CVPR2023/papers/Wei_Towards_Realistic_Long-Tailed_Semi-Supervised_Learning_Consistency_Is_All_You_Need_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`TL`,`Ensemble` | [Official](https://github.com/Gank0078/ACR) |
| [Global and Local Mixture Consistency Cumulative Learning for Long-tailed Visual Recognition](https://openaccess.thecvf.com/content/CVPR2023/papers/Du_Global_and_Local_Mixture_Consistency_Cumulative_Learning_for_Long-Tailed_Visual_CVPR_2023_paper.pdf) | CVPR | 2023 | `CSL`,`RL` | [Official](https://github.com/ynu-yangpeng/GLMC) |
| [Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation](https://openaccess.thecvf.com/content/CVPR2023/papers/Jin_Long-Tailed_Visual_Recognition_via_Self-Heterogeneous_Integration_With_Knowledge_Excavation_CVPR_2023_paper.pdf) | CVPR | 2023 | `TL`,`Ensemble` | [Official](https://github.com/jinyan-06/SHIKE) |
| [Balancing Logit Variation for Long-tailed Semantic Segmentation](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Balancing_Logit_Variation_for_Long-Tailed_Semantic_Segmentation_CVPR_2023_paper.pdf) | CVPR | 2023 | `Aug` | [Official](https://github.com/grantword8/BLV) |
| [Use Your Head: Improving Long-Tail Video Recognition](https://openaccess.thecvf.com/content/CVPR2023/papers/Perrett_Use_Your_Head_Improving_Long-Tail_Video_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `Aug` | [Official](https://github.com/tobyperrett/lmr) |
| [FCC: Feature Clusters Compression for Long-Tailed Visual Recognition](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_FCC_Feature_Clusters_Compression_for_Long-Tailed_Visual_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL` | [Official](https://github.com/lijian16/FCC) |
| [FEND: A Future Enhanced Distribution-Aware Contrastive Learning Framework for Long-tail Trajectory Prediction](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_FEND_A_Future_Enhanced_Distribution-Aware_Contrastive_Learning_Framework_for_Long-Tail_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL` | |
| [SuperDisco: Super-Class Discovery Improves Visual Recognition for the Long-Tail](https://openaccess.thecvf.com/content/CVPR2023/papers/Du_SuperDisco_Super-Class_Discovery_Improves_Visual_Recognition_for_the_Long-Tail_CVPR_2023_paper.pdf) | CVPR | 2023 | `RL` | |
| [Class-Conditional Sharpness-Aware Minimization for Deep Long-Tailed Recognition](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhou_Class-Conditional_Sharpness-Aware_Minimization_for_Deep_Long-Tailed_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `DT` | [Official](https://github.com/zzpustc/CC-SAM) |
| [Balanced Product of Calibrated Experts for Long-Tailed Recognition](https://openaccess.thecvf.com/content/CVPR2023/papers/Aimar_Balanced_Product_of_Calibrated_Experts_for_Long-Tailed_Recognition_CVPR_2023_paper.pdf) | CVPR | 2023 | `Ensemble` | [Official](https://github.com/emasa/BalPoE-CalibratedLT) |
| [No One Left Behind: Improving the Worst Categories in Long-Tailed Learning](https://openaccess.thecvf.com/content/CVPR2023/papers/Du_No_One_Left_Behind_Improving_the_Worst_Categories_in_Long-Tailed_CVPR_2023_paper.pdf) | CVPR | 2023 | `Ensemble` | |
| [On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning](https://openreview.net/pdf?id=v8JIQdiN9Sh) | ICLR | 2023 | `Sampling`,`TL`,`Aug` | [Official](https://github.com/JianhongBai/COLT) |
| [LPT: Long-tailed Prompt Tuning for Image Classification](https://openreview.net/pdf?id=8pOVAeo8ie) | ICLR | 2023 | `Sampling`,`TL`,`Other` | [Official](https://github.com/DongSky/LPT) |
| [Long-Tailed Partial Label Learning via Dynamic Rebalancing](https://openreview.net/pdf?id=sXfWoK4KvSW) | ICLR | 2023 | `CSL` | [Official](https://github.com/MediaBrain-SJTU/RECORDS-LTPLL) |
| [Delving into Semantic Scale Imbalance](https://openreview.net/pdf?id=07tc5kKRIo) | ICLR | 2023 | `CSL`,`RL` | |
| [INPL: Pseudo-labeling the Inliers First for Imbalanced Semi-supervised Learning](https://openreview.net/pdf?id=m6ahb1mpwwX) | ICLR | 2023 | `TL` | |
| [CUDA: Curriculum of Data Augmentation for Long-tailed Recognition](https://openreview.net/pdf?id=RgUPdudkWlN) | ICLR | 2023 | `Aug` | [Official](https://github.com/JianhongBai/COLT) |
| [Long-Tailed Learning Requires Feature Learning](https://openreview.net/pdf?id=S-h1oFv-mq) | ICLR | 2023 | `RL` | |
| [Decoupled Training for Long-Tailed Classification with Stochastic Representations](https://openreview.net/pdf?id=bcYZwYo-0t) | ICLR | 2023 | `RL`,`DT` | |

### 2022

| Title | Venue | Year | Type | Code |
| :--- | :-: | :-: | :-: | :-: |
| [Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition](https://openreview.net/pdf?id=m7CmxlpHTiu) | NeurIPS | 2022 | `CSL`,`Ensemble` | [Official](https://github.com/Vanint/SADE-AgnosticLT) |
| [SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning](https://openreview.net/pdf?id=wUUutywJY6) | NeurIPS | 2022 | `CSL` | [Official](https://github.com/hbzju/SoLar) |
| [Do We Really Need a Learnable Classifier at the End of Deep Neural Network?](https://openreview.net/pdf?id=A6EmxI3_Xc) | NeurIPS | 2022 | `RL`,`CD` | |
| [Maximum Class Separation as Inductive Bias in One Matrix](https://openreview.net/pdf?id=MbVS6BuJ3ql) | NeurIPS | 2022 | `CD` | [Official](https://github.com/tkasarla/max-separation-as-inductive-bias) |
| [Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data](https://openreview.net/pdf?id=9DYKrsFSU2) | NeurIPS | 2022 | `Other` | [Official](https://github.com/val-iisc/Saddle-LongTail) |
| [Breadcrumbs: Adversarial Class-Balanced Sampling for Long-tailed Recognition](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136840628.pdf) | ECCV | 2022 | `Sampling`,`Aug`,`DT` | [Official](https://github.com/BoLiu-SVCL/Breadcrumbs) |
| [Constructing Balance from Imbalance for Long-tailed Image Recognition](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136800036.pdf) | ECCV | 2022 | `Sampling`,`RL` | [Official](https://github.com/silicx/DLSA) |
| [Tackling Long-Tailed Category Distribution under Domain Shifts](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136830706.pdf) | ECCV | 2022 | `CSL`,`Aug`,`RL` | [Official](https://github.com/guxiao0822/lt-ds) |
| [Improving GANs for Long-Tailed Data through Group Spectral Regularization](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136750423.pdf) | ECCV | 2022 | `CSL`,`Other` | [Official](https://github.com/val-iisc/gSRGAN) |
| [Learning Class-wise Visual-Linguistic Representation for Long-Tailed Visual Recognition](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850072.pdf) | ECCV | 2022 | `TL`,`RL` | [Official](https://github.com/ChangyaoTian/VL-LTR) |
| [Learning with Free Object Segments for Long-Tailed Instance Segmentation](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136700648.pdf) | ECCV | 2022 | `Aug` | |
| [SAFA: Sample-Adaptive Feature Augmentation for Long-Tailed Image Classification](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136840578.pdf) | ECCV | 2022 | `Aug`,`RL` | |
| [On Multi-Domain Long-Tailed Recognition, Imbalanced Domain Generalization and Beyond](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136800054.pdf) | ECCV | 2022 | `RL` | [Official](https://github.com/YyzHarry/multi-domain-imbalance) |
| [Invariant Feature Learning for Generalized Long-Tailed Classification](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136840698.pdf) | ECCV | 2022 | `RL` | [Official](https://github.com/KaihuaTang/Generalized-Long-Tailed-Benchmarks.pytorch) |
| [Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-tailed Learning](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136840176.pdf) | ECCV | 2022 | `RL`,`CD` | [Official](https://github.com/SiLangWHL/vMF-OP) |
| [Long-tailed Instance Segmentation Using Gumbel Optimized Loss](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136700349.pdf) | ECCV | 2022 | `CD` | [Official](https://github.com/kostas1515/GOL) |
| [Long-Tailed Class Incremental Learning](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136930486.pdf) | ECCV | 2022 | `DT` | [Official](https://github.com/xialeiliu/Long-Tailed-CIL) |
| [Identifying Hard Noise in Long-Tailed Sample Distribution](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860725.pdf) | ECCV | 2022 | `Other` | [Official](https://github.com/yxymessi/H2E-Framework) |
| [Relieving Long-tailed Instance Segmentation via Pairwise Class Balance](https://arxiv.org/pdf/2201.02784.pdf) | CVPR | 2022 | `CSL` | [Official](https://github.com/megvii-research/PCB) |
| [The Majority Can Help the Minority: Context-rich Minority Oversampling for Long-tailed Classification](https://arxiv.org/pdf/2112.00412.pdf) | CVPR | 2022 | `TL`,`Aug` | [Official](https://github.com/naver-ai/cmo) |
| [Long-tail Recognition via Compositional Knowledge Transfer](https://arxiv.org/pdf/2112.06741.pdf) | CVPR | 2022 | `TL`,`RL` | |
| [BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning](https://arxiv.org/pdf/2203.01522.pdf) | CVPR | 2022 | `TL`,`RL` | [Official](https://github.com/zhihou7/BatchFormer) |
| [Nested Collaborative Learning for Long-Tailed Visual Recognition](https://arxiv.org/pdf/2203.15359.pdf) | CVPR | 2022 | `RL`,`Ensemble` | [Official](https://github.com/Bazinga699/NCL) |
| [Long-Tailed Recognition via Weight Balancing](https://arxiv.org/pdf/2203.14197.pdf) | CVPR | 2022 | `DT` | [Official](https://github.com/ShadeAlsha/LTR-weight-balancing) |
| [Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation](https://arxiv.org/pdf/2203.09744.pdf) | CVPR | 2022 | `Other` | [Official](https://github.com/lslrh/CPSL) |
| [Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC](https://arxiv.org/pdf/2203.15565.pdf) | CVPR | 2022 | `Other` | [Official](https://github.com/deepinsight/insightface/tree/master/recognition) |
| [Optimal Transport for Long-Tailed Recognition with Learnable Cost Matrix](https://openreview.net/pdf?id=t98k9ePQQpn) | ICLR | 2022 | `LA` | |
| [Do Deep Networks Transfer Invariances Across Classes?](https://openreview.net/pdf?id=Fn7i_r5rR0q) | ICLR | 2022 | `TL`,`Aug` | [Official](https://github.com/AllanYangZhou/generative-invariance-transfer) |
| [Self-supervised Learning Is More Robust to Dataset Imbalance](https://openreview.net/pdf?id=4AZz9osqrar) | ICLR | 2022 | `RL` | |

### 2021

| Title | Venue | Year | Type | Code |
| :--- | :-: | :-: | :-: | :-: |
| [Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling](https://openreview.net/pdf?id=EIfV-XAggKo) | NeurIPS | 2021 | `Sampling`,`TL`,`DC` | [Official](https://github.com/VITA-Group/MAK) |
| [Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning](https://papers.nips.cc/paper/2021/file/b98249b38337c5088bbc660d8f872d6a-Paper.pdf) | NeurIPS | 2021 | `Sampling`,`CSL`,`TL`,`Aug` | [Official](https://github.com/hzhupku/SemiSeg-AEL) |
| [On Model Calibration for Long-Tailed Object Detection and Instance Segmentation](https://proceedings.neurips.cc/paper/2021/file/14ad095ecc1c3e1b87f3c522836e9158-Paper.pdf) | NeurIPS | 2021 | `LA` | [Official](https://github.com/tydpan/NorCal) |
| [Label-Imbalanced and Group-Sensitive Classification under Overparameterization](https://openreview.net/pdf?id=UZm2IQhgIyB) | NeurIPS | 2021 | `LA` | |
| [Towards Calibrated Model for Long-tailed Visual Recognition from Prior Perspective](https://papers.nips.cc/paper/2021/file/39ae2ed11b14a4ccb41d35e9d1ba5d11-Paper.pdf) | NeurIPS | 2021 | `Aug`,`RL` | [Official](https://github.com/XuZhengzhuo/Prior-LT) |
| [Supercharging Imbalanced Data Learning with Energy-based Contrastive Representation Transfer](https://papers.nips.cc/paper/2021/file/b151ce4935a3c2807e1dd9963eda16d8-Paper.pdf) | NeurIPS | 2021 | `Aug`,`TL`,`RL` | [Official](https://github.com/ZidiXiu/ECRT) |
| [VideoLT: Large-scale Long-tailed Video Recognition](https://arxiv.org/pdf/2105.02668.pdf) | ICCV | 2021 | `Sampling` | [Official](https://github.com/17Skye17/VideoLT) |
| [Exploring Classification Equilibrium in Long-Tailed Object Detection](https://arxiv.org/pdf/2108.07507.pdf) | ICCV | 2021 | `Sampling`,`CSL` | [Official](https://github.com/fcjian/LOCE) |
| [GistNet: A Geometric Structure Transfer Network for Long-Tailed Recognition](https://arxiv.org/pdf/2105.00131.pdf) | ICCV | 2021 | `Sampling`,`TL`,`DC` | |
| [FASA: Feature Augmentation and Sampling Adaptation for Long-Tailed Instance Segmentation](https://arxiv.org/pdf/2102.12867.pdf) | ICCV | 2021 | `Sampling`,`CSL` | |
| [ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot](https://arxiv.org/pdf/2108.02385.pdf) | ICCV | 2021 | `Sampling`,`Ensemble` | [Official](https://github.com/jrcai/ACE?utm_source=catalyzex.com) |
| [Influence-Balanced Loss for Imbalanced Visual Classification](https://arxiv.org/pdf/2110.02444.pdf) | ICCV | 2021 | `CSL` | [Official](https://github.com/pseulki/IB-Loss) |
| [Re-distributing Biased Pseudo Labels for Semi-supervised Semantic Segmentation: A Baseline Investigation](https://arxiv.org/pdf/2107.11279.pdf) | ICCV | 2021 | `TL` | [Official](https://github.com/CVMI-Lab/DARS) |
| [Self Supervision to Distillation for Long-Tailed Visual Recognition](https://arxiv.org/pdf/2109.04075.pdf) | ICCV | 2021 | `TL` | [Official](https://github.com/MCG-NJU/SSD-LT) |
| [Distilling Virtual Examples for Long-tailed Recognition](https://cs.nju.edu.cn/wujx/paper/ICCV2021_DiVE.pdf) | ICCV | 2021 | `TL` | |
| [MosaicOS: A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection](https://arxiv.org/pdf/2102.08884.pdf) | ICCV | 2021 | `TL` | [Official](https://github.com/czhang0528/MosaicOS/) |
| [Parametric Contrastive Learning](https://arxiv.org/pdf/2107.12028.pdf) | ICCV | 2021 | `RL` | [Official](https://github.com/dvlab-research/Parametric-Contrastive-Learning) |
| [Distributional Robustness Loss for Long-tail Learning](https://arxiv.org/pdf/2104.03066.pdf) | ICCV | 2021 | `RL` | [Official](https://github.com/dvirsamuel/DRO-LT) |
| [Learning of Visual Relations: The Devil Is in the Tails](https://arxiv.org/pdf/2108.09668.pdf) | ICCV | 2021 | `DT` | |
| [Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection](https://arxiv.org/pdf/2104.05702.pdf) | ICML | 2021 | `Sampling` | [Official](https://github.com/NVlabs/RIO) |
| [Self-Damaging Contrastive Learning](https://arxiv.org/pdf/2106.02990.pdf) | ICML | 2021 | `TL`,`RL` | [Official](https://github.com/VITA-Group/SDCLR) |
| [Delving into Deep Imbalanced Regression](https://arxiv.org/pdf/2102.09554.pdf) | ICML | 2021 | `Other` | [Official](https://github.com/YyzHarry/imbalanced-regression) |
| [Long-Tailed Multi-Label Visual Recognition by Collaborative Training on Uniform and Re-balanced Samplings](https://openaccess.thecvf.com/content/CVPR2021/papers/Guo_Long-Tailed_Multi-Label_Visual_Recognition_by_Collaborative_Training_on_Uniform_and_CVPR_2021_paper.pdf) | CVPR | 2021 | `Sampling`,`Ensemble` | |
| [Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection](https://openaccess.thecvf.com/content/CVPR2021/papers/Tan_Equalization_Loss_v2_A_New_Gradient_Balance_Approach_for_Long-Tailed_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL` | [Official](https://github.com/tztztztztz/eqlv2) |
| [Seesaw Loss for Long-Tailed Instance Segmentation](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Seesaw_Loss_for_Long-Tailed_Instance_Segmentation_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL` | [Official](https://github.com/open-mmlab/mmdetection) |
| [Adaptive Class Suppression Loss for Long-Tail Object Detection](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Adaptive_Class_Suppression_Loss_for_Long-Tail_Object_Detection_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL` | [Official](https://github.com/CASIA-IVA-Lab/ACSL) |
| [PML: Progressive Margin Loss for Long-tailed Age Classification](https://openaccess.thecvf.com/content/CVPR2021/papers/Deng_PML_Progressive_Margin_Loss_for_Long-Tailed_Age_Classification_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL` | |
| [Disentangling Label Distribution for Long-tailed Visual Recognition](https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_Disentangling_Label_Distribution_for_Long-Tailed_Visual_Recognition_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL`,`LA` | [Official](https://github.com/hyperconnect/LADE) |
| [Adversarial Robustness under Long-Tailed Distribution](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Adversarial_Robustness_Under_Long-Tailed_Distribution_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL`,`LA`,`CD` | [Official](https://github.com/wutong16/Adversarial_Long-Tail) |
| [Distribution Alignment: A Unified Framework for Long-tail Visual Recognition](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Distribution_Alignment_A_Unified_Framework_for_Long-Tail_Visual_Recognition_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL`,`LA`,`DT` | [Official](https://github.com/Megvii-BaseDetection/DisAlign) |
| [Improving Calibration for Long-Tailed Recognition](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhong_Improving_Calibration_for_Long-Tailed_Recognition_CVPR_2021_paper.pdf) | CVPR | 2021 | `CSL`,`Aug`,`DT` | [Official](https://github.com/dvlab-research/MiSLAS) |
| [CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning](https://openaccess.thecvf.com/content/CVPR2021/papers/Wei_CReST_A_Class-Rebalancing_Self-Training_Framework_for_Imbalanced_Semi-Supervised_Learning_CVPR_2021_paper.pdf) | CVPR | 2021 | `TL` | [Official](https://github.com/google-research/crest) |
| [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training to Recognize Long-Tail Visual Concepts](https://openaccess.thecvf.com/content/CVPR2021/papers/Changpinyo_Conceptual_12M_Pushing_Web-Scale_Image-Text_Pre-Training_To_Recognize_Long-Tail_Visual_CVPR_2021_paper.pdf) | CVPR | 2021 | `TL` | [Official](https://github.com/google-research-datasets/conceptual-12m) |
| [RSG: A Simple but Effective Module for Learning Imbalanced Data](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_RSG_A_Simple_CVPR_2021_supplemental.pdf) | CVPR | 2021 | `TL`,`Aug` | [Official](https://github.com/Jianf-Wang/RSG) |
| [MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition](https://openaccess.thecvf.com/content/CVPR2021/papers/Li_MetaSAug_Meta_Semantic_Augmentation_for_Long-Tailed_Visual_Recognition_CVPR_2021_paper.pdf) | CVPR | 2021 | `Aug` | [Official](https://github.com/BIT-DA/MetaSAug) |
| [Contrastive Learning Based Hybrid Networks for Long-Tailed Image Classification](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Contrastive_Learning_Based_Hybrid_Networks_for_Long-Tailed_Image_Classification_CVPR_2021_paper.pdf) | CVPR | 2021 | `RL` | |
| [Unsupervised Discovery of the Long-Tail in Instance Segmentation Using Hierarchical Self-Supervision](https://openaccess.thecvf.com/content/CVPR2021/papers/Weng_Unsupervised_Discovery_of_the_Long-Tail_in_Instance_Segmentation_Using_Hierarchical_CVPR_2021_paper.pdf) | CVPR | 2021 | `RL` | |
| [Long-tail Learning via Logit Adjustment](https://openreview.net/pdf?id=37nvvqkCo5) | ICLR | 2021 | `LA` | [Official](https://github.com/google-research/google-research/tree/master/logit_adjustment) |
| [Long-tailed Recognition by Routing Diverse Distribution-Aware Experts](https://openreview.net/pdf?id=D9I3drBz4UC) | ICLR | 2021 | `TL`,`Ensemble` | [Official](https://github.com/frank-xwang/RIDE-LongTailRecognition) |
| [Exploring Balanced Feature Spaces for Representation Learning](https://openreview.net/pdf?id=OqtLIabPTit) | ICLR | 2021 | `RL`,`DT` | |

### 2020

| Title | Venue | Year | Type | Code |
| :--- | :-: | :-: | :-: | :-: |
| [Balanced Meta-Softmax for Long-Tailed Visual Recognition](https://proceedings.neurips.cc/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-Paper.pdf) | NeurIPS | 2020 | `Sampling`,`CSL` | [Official](https://github.com/jiawei-ren/BalancedMetaSoftmax) |
| [Posterior Re-calibration for Imbalanced Datasets](https://proceedings.neurips.cc/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-Paper.pdf) | NeurIPS | 2020 | `LA` | [Official](https://github.com/GT-RIPL/UNO-IC) |
| [Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect](https://proceedings.neurips.cc/paper/2020/file/1091660f3dff84fd648efe31391c5524-Paper.pdf) | NeurIPS | 2020 | `LA`,`CD` | [Official](https://github.com/KaihuaTang/Long-Tailed-Recognition.pytorch) |
| [Rethinking the Value of Labels for Improving Class-Imbalanced Learning](https://proceedings.neurips.cc/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-Paper.pdf) | NeurIPS | 2020 | `TL`,`RL` | [Official](https://github.com/YyzHarry/imbalanced-semi-self) |
| [The Devil Is in Classification: A Simple Framework for Long-tail Instance Segmentation](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590715.pdf) | ECCV | 2020 | `Sampling`,`DT`,`Ensemble` | [Official](https://github.com/twangnh/SimCal) |
| [Imbalanced Continual Learning with Partitioning Reservoir Sampling](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf) | ECCV | 2020 | `Sampling` | [Official](https://github.com/cdjkim/PRS) |
| [Distribution-Balanced Loss for Multi-label Classification in Long-Tailed Datasets](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490154.pdf) | ECCV | 2020 | `CSL` | [Official](https://github.com/wutong16/DistributionBalancedLoss) |
| [Feature Space Augmentation for Long-Tailed Data](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123740681.pdf) | ECCV | 2020 | `TL`,`Aug`,`DT` | |
| [Learning from Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500239.pdf) | ECCV | 2020 | `TL`,`Ensemble` | [Official](https://github.com/xiangly55/LFME) |
| [Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123530171.pdf) | ECCV | 2020 | `CD` | [Official](https://github.com/gina9726/Deep-RTC) |
| [Learning to Segment the Tail](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hu_Learning_to_Segment_the_Tail_CVPR_2020_paper.pdf) | CVPR | 2020 | `Sampling`,`TL` | [Official](https://github.com/JoyHuYY1412/LST_LVIS) |
| [BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhou_BBN_Bilateral-Branch_Network_With_Cumulative_Learning_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf) | CVPR | 2020 | `Sampling`,`Ensemble` | [Official](https://github.com/Megvii-Nanjing/BBN) |
| [Overcoming Classifier Imbalance for Long-Tail Object Detection with Balanced Group Softmax](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Overcoming_Classifier_Imbalance_for_Long-Tail_Object_Detection_With_Balanced_Group_CVPR_2020_paper.pdf) | CVPR | 2020 | `Sampling`,`Ensemble` | [Official](https://github.com/FishYuLi/BalancedGroupSoftmax) |
| [Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective](https://openaccess.thecvf.com/content_CVPR_2020/papers/Jamal_Rethinking_Class-Balanced_Methods_for_Long-Tailed_Visual_Recognition_From_a_Domain_CVPR_2020_paper.pdf) | CVPR | 2020 | `CSL` | [Official](https://github.com/abdullahjamal/Longtail_DA) |
| [Equalization Loss for Long-Tailed Object Recognition](https://openaccess.thecvf.com/content_CVPR_2020/papers/Tan_Equalization_Loss_for_Long-Tailed_Object_Recognition_CVPR_2020_paper.pdf) | CVPR | 2020 | `CSL` | [Official](https://github.com/tztztztztz/eqlv2) |
| [Domain Balancing: Face Recognition on Long-Tailed Domains](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cao_Domain_Balancing_Face_Recognition_on_Long-Tailed_Domains_CVPR_2020_paper.pdf) | CVPR | 2020 | `CSL` | |
| [M2m: Imbalanced Classification via Major-to-minor Translation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_M2m_Imbalanced_Classification_via_Major-to-Minor_Translation_CVPR_2020_paper.pdf) | CVPR | 2020 | `TL`,`Aug` |
       [官方](https:\u002F\u002Fgithub.com\u002Falinlab\u002FM2m)          |\n  | [长尾数据上的深度表示学习：可学习嵌入增强视角](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FLiu_Deep_Representation_Learning_on_Long-Tailed_Data_A_Learnable_Embedding_Augmentation_CVPR_2020_paper.pdf) |  CVPR   | 2020 |         `TL`,`增强`,`RL`         |                                                              |\n  | [利用区域自注意力扩充情景记忆以进行长尾视觉识别](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhu_Inflated_Episodic_Memory_With_Region_Self-Attention_for_Long-Tailed_Visual_Recognition_CVPR_2020_paper.pdf) |  CVPR   | 2020 |              `RL`               |                                                              |\n  | [解耦表示与分类器以进行长尾识别](https:\u002F\u002Fopenreview.net\u002Fpdf?id=r1gRTCVFvB) |  ICLR   | 2020 | `采样`,`CSL`,`RL`,`CD`,`DT` | [官方](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fclassifier-balancing) |\n\n\n  ### 2019\n\n| 标题                                                        | 会议\u002F期刊   | 年份 |    类型    |                             代码                             |\n  | :----------------------------------------------------------- | :-----: | :--: | :--------: | :----------------------------------------------------------: |\n  | [元加权网络：学习样本加权的显式映射](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002Fe58cc5ca94270acaceed13bc82dfedf7-Paper.pdf) | NeurIPS | 2019 |   `CSL`    |  [官方](https:\u002F\u002Fgithub.com\u002Fxjtushujun\u002Fmeta-weight-net)   |\n  | [基于标签分布感知间隔损失的学习不平衡数据集](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2019\u002Ffile\u002F621461af90cadfdaf0e8d4cc25129f91-Paper.pdf) | NeurIPS | 2019 |   `CSL`    |        [官方](https:\u002F\u002Fgithub.com\u002Fkaidic\u002FLDAM-DRW)        |\n  | 
[面向不平衡数据分类的动态课程学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FWang_Dynamic_Curriculum_Learning_for_Imbalanced_Data_Classification_ICCV_2019_paper.pdf) |  ICCV   | 2019 | `Sampling` |                                                              |\n  | [基于有效样本数的类别平衡损失](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FCui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf) |  CVPR   | 2019 |   `CSL`    | [官方](https:\u002F\u002Fgithub.com\u002Frichardaecn\u002Fclass-balanced-loss) |\n  | [用不确定性取得恰当平衡](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FKhan_Striking_the_Right_Balance_With_Uncertainty_CVPR_2019_paper.pdf) |  CVPR   | 2019 |   `CSL`    |                                                              |\n  | [针对欠代表数据的人脸识别特征迁移学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FYin_Feature_Transfer_Learning_for_Face_Recognition_With_Under-Represented_Data_CVPR_2019_paper.pdf) |  CVPR   | 2019 | `TL`,`Aug` |                                                              |\n  | [长尾噪声数据下的深度人脸识别不均衡训练](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhong_Unequal-Training_for_Deep_Face_Recognition_With_Long-Tailed_Noisy_Data_CVPR_2019_paper.pdf) |  CVPR   | 2019 |    `RL`    | [官方](https:\u002F\u002Fgithub.com\u002Fzhongyy\u002FUnequal-Training-for-Deep-Face-Recognition-with-Long-Tailed-Noisy-Data) |\n  | [开放世界中的大规模长尾识别](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLiu_Large-Scale_Long-Tailed_Recognition_in_an_Open_World_CVPR_2019_paper.pdf) |  CVPR   | 2019 |    `RL`    | [官方](https:\u002F\u002Fgithub.com\u002Fzhmiao\u002FOpenLongTailRecognition-OLTR) |\n\n  ### 2018\n\n  | 标题                                                        | 会议\u002F期刊 | 年份 | 类型 |                             代码                             |\n  | 
:----------------------------------------------------------- | :---: | :--: | :--: | :----------------------------------------------------------: |\n  | [大规模细粒度分类与领域特定的迁移学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FCui_Large_Scale_Fine-Grained_CVPR_2018_paper.pdf) | CVPR  | 2018 | `TL` | [官方](https:\u002F\u002Fgithub.com\u002Frichardaecn\u002Fcvpr18-inaturalist-transfer) |\n\n  ### 2017\n\n  | 标题                                                        | 会议\u002F期刊   | 年份 | 类型  | 代码 |\n  | :----------------------------------------------------------- | :-----: | :--: | :---: | :--: |\n  | [学习建模长尾部分](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2017\u002Ffile\u002F147ebe637038ca50a1265abac8dea181-Paper.pdf) | NeurIPS | 2017 | `CSL` |      |\n  | [密集目标检测中的焦点损失](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLin_Focal_Loss_for_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `CSL` |      |\n  | [长尾训练数据下深度人脸识别的范围损失](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FZhang_Range_Loss_for_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `RL`  |      |\n  | [不平衡深度学习中的类别校正困难挖掘](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FDong_Class_Rectification_Hard_ICCV_2017_paper.pdf) |  ICCV   | 2017 | `RL`  |      |\n\n  ### 2016\n\n  | 标题                                                        | 会议\u002F期刊 | 年份 |      类型       | 代码 |\n  | :----------------------------------------------------------- | :---: | :--: | :-------------: | :--: |\n  | [用于不平衡分类的深度表示学习](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FHuang_Learning_Deep_Representation_CVPR_2016_paper.pdf) | CVPR  | 2016 | `Sampling`,`RL` |      |\n  | [长尾分布下目标检测深度模型微调的影响因素](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FOuyang_Factors_in_Finetuning_CVPR_2016_paper.pdf) | CVPR  | 2016 |   `CSL`,`RL`    |      |\n\n  ## 3. 
基准数据集\n\n  | 数据集          |      长尾任务      | # 类别 | # 训练数据 | # 测试数据 |\n  | :--------------- | :------------------------: | :-----: | :-------------: | :---------: |\n  | ImageNet-LT      |       分类       |  1,000  |     115,846     |   50,000    |\n  | CIFAR100-LT      |       分类       |   100   |     50,000      |   10,000    |\n  | Places-LT        |       分类       |   365   |     62,500      |   36,500    |\n  | iNaturalist 2018 |       分类       |  8,142  |     437,513     |   24,426    |\n  | LVIS v0.5        | 检测和分割 |  1,230  |     57,000      |   20,000    |\n  | LVIS v1          | 检测和分割 |  1,203  |     100,000     |   19,800    |\n  | VOC-LT           | 多标签分类 |   20    |      1,142      |    4,952    |\n  | COCO-LT          | 多标签分类 |   80    |      1,909      |    5,000    |\n  | VideoLT          |    视频分类    |  1,004  |     179,352     |   25,622    |\n\n  ## 4. 我们的代码库\n\n  * 要使用我们的代码库，请安装依赖项：\n    ```\n    pip install -r requirements.txt\n    ```\n  * 硬件要求：建议使用 4 张显存 ≥ 23GB 的 GPU。\n  * ImageNet-LT数据集：请下载ImageNet-1K数据集，并将其放置在.\u002Fdata文件夹中。\n    ```\n    data\n    └──ImageNet\n        ├── train\n        └── val\n    ```\n  * Softmax：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fce.yaml  --exp_name imagenet\u002FCE  --gpu 0,1,2,3\n    ```\n  * 加权Softmax：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fweighted_ce.yaml  --exp_name imagenet\u002Fweighted_ce  --gpu 0,1,2,3\n    ```\n  * ESQL（均衡化损失）：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fseql.yaml  --exp_name imagenet\u002Fseql  --gpu 0,1,2,3\n    ```\n  * 平衡Softmax：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fbalanced_softmax.yaml  --exp_name imagenet\u002FBS  --gpu 0,1,2,3\n    ```\n  * LADE：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 
--cfg config\u002FImageNet_LT\u002Flade.yaml  --exp_name imagenet\u002FLADE  --gpu 0,1,2,3\n    ```\n  * 去混杂（因果）：\n    ```\n    cd .\u002FMain-codebase\n    # 训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fcausal.yaml  --exp_name imagenet\u002Fcausal --remine_lambda 0.1 --alpha 0.005 --gpu 0,1,2,3\n    ```\n  * 解耦（IB-CRT）：\n    ```\n    cd .\u002FMain-codebase\n    # 第一阶段训练\n    python3 main.py --seed 1 --cfg config\u002FImageNet_LT\u002Fce.yaml  --exp_name imagenet\u002FCE  --gpu 0,1,2,3\n    # 第二阶段训练\n    python3 main.py --cfg .\u002Fconfig\u002FImageNet_LT\u002Fcls_crt.yaml --model_dir exp_results\u002Fimagenet\u002FCE\u002Ffinal_model_checkpoint.pth  --gpu 0,1,2,3\n    ```\n  * MiSLAS：\n    ```\n    cd .\u002FMiSLAS-codebase\n    # 第一阶段训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage1.py --cfg config\u002Fimagenet\u002Fimagenet_resnext50_stage1_mixup.yaml\n    # 第二阶段训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train_stage2.py --cfg config\u002Fimagenet\u002Fimagenet_resnext50_stage2_mislas.yaml resume checkpoint_path\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 python3 eval.py --cfg .\u002Fconfig\u002Fimagenet\u002Fimagenet_resnext50_stage2_mislas.yaml  resume checkpoint_path_stage2\n    ```\n  * RSG：\n    ```\n    cd .\u002FRSG-codebase\n    # 训练\n    python3 imagenet_lt_train.py\n    # 评估\n    python3 imagenet_lt_test.py\n    ```\n  * ResLT：\n    ```\n    cd .\u002FResLT-codebase\n    # 训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh\u002FX50.sh\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 bash sh\u002FX50_eval.sh\n    # 测试性能可在日志文件中找到。\n    ```\n  * PaCo：\n    ```\n    cd .\u002FPaCo-codebase\n    # 训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 bash sh\u002FImageNetLT_train_X50.sh\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 bash sh\u002FImageNetLT_eval_X50.sh\n    # 测试性能可在日志文件中找到。\n    ```\n  * LDAM：\n    ```\n    cd .\u002FEnsemble-codebase\n    # 训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_ldam.json\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n  * 
RIDE：\n    ```\n    cd .\u002FEnsemble-codebase\n    # 训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_ride.json\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n  * SADE：\n    ```\n    cd .\u002FEnsemble-codebase\n    # 训练\n    CUDA_VISIBLE_DEVICES=0,1,2,3 python3 train.py -c .\u002Fconfigs\u002Fconfig_imagenet_lt_resnext50_sade.json\n    # 评估\n    CUDA_VISIBLE_DEVICES=0 python3 test.py -r checkpoint_path\n    ```\n\n  ## 5. 实证研究\n\n  ### (1) 长尾基准测试性能\n\n  * 我们在ImageNet-LT上评估了几种最先进的方法，以了解它们通过新的评估指标（即UA和RA）在多大程度上处理类别不平衡问题。我们根据类别重平衡（CR）、信息增强（IA）和模块改进（MI）对这些方法进行了分类。\n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_5e127bd8ee68.png\" width=900>\n  \u003C\u002Fp>\n  \n\n  * 几乎所有长尾方法在准确率方面都优于Softmax基线，这表明长尾学习的有效性。\n  * 对于大多数长尾方法来说，训练200个epoch能够带来更好的性能，因为充分的训练使深度模型能够更好地拟合数据并学习更好的图像表示。\n  * 除了准确率之外，我们还基于UA和RA评估了长尾方法。对于具有较高UA的方法，其性能提升不仅来自于类别不平衡的缓解，还可能来自其他因素，如数据增强或更好的网络架构。因此，仅使用准确率进行评估并不够准确，而我们提出的RA指标则是一个很好的补充，因为它可以减轻类别不平衡以外因素的影响。\n  * 例如，基于数据混合的MiSLAS在90个训练epoch下，其准确率高于平衡Softmax，但它的UA也更高。因此，MiSLAS的相对准确率低于平衡Softmax，这意味着在90个训练epoch下，平衡Softmax比MiSLAS更能有效缓解类别不平衡。\n  * 尽管一些近期高精度的方法RA较低，但从下图可以看出，长尾学习的整体发展趋势仍然是积极的。\n\n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_2f21959c8e4a.png\" width=900>\n  \u003C\u002Fp>\n  \n\n  * 目前在准确率和RA方面都处于最先进水平的长尾方法是SADE（一种基于集成的方法）。\n\n  ### (2) 关于代价敏感损失的更多讨论\n\n  * 我们进一步评估了基于解耦训练方案的不同代价敏感学习损失的表现。\n  * 与联合训练相比，解耦训练可以进一步提高除平衡Softmax（BS）之外的大多数代价敏感学习方法的整体性能。\n  * 尽管BS在单阶段训练下表现优于其他代价敏感损失，但在解耦训练下它们的表现却相当接近。这表明，虽然这些代价敏感损失在联合训练下的表现不同，但它们本质上学习到的特征表示质量是相似的。\n\n  \u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_readme_29fd15ef3f2c.png\" width=500>\n  \u003C\u002Fp>\n  \n\n\n  ## 6. 
引用\n\n  如果本仓库对您有所帮助，请引用我们的综述。\n\n  ```\n  @article{zhang2023deep,\n        title={Deep long-tailed learning: A survey},\n        author={Zhang, Yifan and Kang, Bingyi and Hooi, Bryan and Yan, Shuicheng and Feng, Jiashi},\n        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n        year={2023},\n        publisher={IEEE}\n  }\n  ```\n\n  ## 7. 其他资源\n\n  - [Papers With Code：长尾学习](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Flong-tail-learning)\n  - [zzw-zwzhang\u002F长尾识别精选](https:\u002F\u002Fgithub.com\u002Fzzw-zwzhang\u002FAwesome-of-Long-Tailed-Recognition)\n  - [SADE：测试无关的长尾识别](https:\u002F\u002Fgithub.com\u002FVanint\u002FSADE-AgnosticLT)","# Awesome-LongTailed-Learning 快速上手指南\n\n`Awesome-LongTailed-Learning` 是一个针对长尾分布学习（Long-Tailed Learning）的开源资源汇总项目，源自 TPAMI 2023 综述论文。它整理了该领域的分类体系、顶级会议论文列表及相关代码库，旨在帮助开发者快速定位解决方案和复现前沿算法。\n\n> **注意**：本项目主要是一个**资源索引清单（Awesome List）**，而非单一的 Python 包。以下指南将指导你如何获取资源、配置环境并运行列表中提供的具体算法代码。\n\n## 1. 环境准备\n\n在开始使用前，请确保你的开发环境满足深度学习研究的基本需求。由于列表中涵盖的方法多基于 PyTorch，推荐以下配置：\n\n*   **操作系统**: Linux (Ubuntu 18.04\u002F20.04\u002F22.04) 或 macOS\n*   **Python 版本**: 3.8 - 3.10 (推荐 3.9)\n*   **深度学习框架**: PyTorch >= 1.9.0\n*   **硬件要求**: 建议使用 NVIDIA GPU (显存 >= 8GB)，部分大模型或检测任务可能需要更高配置。\n*   **前置依赖**:\n    *   `git`: 用于克隆仓库\n    *   `pip` 或 `conda`: 包管理工具\n    *   `CUDA Toolkit`: 需与 PyTorch 版本匹配\n\n**国内加速建议**：\n推荐使用清华源或阿里源加速 Python 包和 Git 克隆，以提升下载速度。\n\n## 2. 安装步骤\n\n由于本项目是论文与代码的集合，你需要先克隆主仓库获取列表，然后根据需要选择具体的子项目（论文代码）进行安装。\n\n### 第一步：克隆主仓库\n获取最新的长尾学习论文列表和分类信息。\n\n```bash\n# 从 GitHub 克隆官方仓库\ngit clone https:\u002F\u002Fgithub.com\u002FVanint\u002FAwesome-LongTailed-Learning.git\n# 网络不畅时可改用国内镜像（若存在同步镜像）\n# git clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002FAwesome-LongTailed-Learning.git\n\ncd Awesome-LongTailed-Learning\n```\n\n### 第二步：选择并安装具体算法\n浏览 `README.md` 中的论文列表（如 2024\u002F2025 年顶会论文），找到你感兴趣的方法及其官方代码链接（Code 列）。\n\n以列表中 **ICCV 2025** 的 `FedYoYo` (Federated Learning under Long-tailed Data) 为例：\n\n```bash\n# 1. 
克隆特定算法的官方代码库\ngit clone https:\u002F\u002Fgithub.com\u002Fshanss132\u002FFedYoYo.git\ncd FedYoYo\n\n# 2. 创建虚拟环境 (推荐)\nconda create -n lt_learning python=3.9\nconda activate lt_learning\n\n# 3. 安装依赖 (使用国内 pip 源)\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 4. 安装 PyTorch (根据官网指令，此处以 CUDA 11.8 为例，使用 PyTorch 官方 wheel 源)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n```\n\n> **提示**：不同论文的代码依赖可能不同，请务必进入对应子项目的目录后，阅读其独立的 `README.md` 或 `requirements.txt` 进行安装。\n\n## 3. 基本使用\n\n本项目的核心用途是**查找方案**和**复现代码**。以下是典型的工作流：\n\n### 场景：复现一个长尾分类算法\n假设你要复现列表中的 `ConMix` (ICLR 2025, Contrastive Mixup) 方法。\n\n1.  **定位资源**：\n    在主列表中找到 `ConMix`，获取其代码地址：`https:\u002F\u002Fgithub.com\u002FLZX-001\u002FConMix`。\n\n2.  **获取数据**：\n    长尾学习通常使用 CIFAR-100-LT 或 ImageNet-LT 数据集。大多数代码库会自动下载或提供脚本生成长尾分布数据。\n    \n    ```bash\n    cd ConMix\n    # 示例：运行数据准备脚本 (具体命令视该项目而定)\n    python prepare_data.py --dataset cifar100 --imbalance_ratio 100\n    ```\n\n3.  **训练模型**：\n    执行训练脚本，指定长尾学习策略（如 Logit Adjustment 或 Re-sampling）。\n\n    ```bash\n    # 示例训练命令 (参考该项目具体文档)\n    python train.py \\\n      --dataset cifar100 \\\n      --network resnet32 \\\n      --loss_type ConMix \\\n      --batch_size 128 \\\n      --epochs 200\n    ```\n\n4.  
**评估结果**：\n    查看输出日志中的 `Many-shot`, `Medium-shot`, `Few-shot` 准确率，这是评估长尾学习效果的关键指标。\n\n### 如何利用本列表进行研究\n*   **按类别筛选**：参考项目中的分类表（Type of Long-tailed Learning），根据你的需求查找对应符号的方法：\n    *   `Sampling`: 重采样策略\n    *   `LA` (Logit Adjustment): 对数几率调整\n    *   `Aug`: 数据增强\n    *   `TL`: 迁移学习\n    *   `RL`: 表示学习\n*   **追踪最新进展**：定期 Pull 更新主仓库，查看 2025 年 ICCV\u002FCVPR\u002FICML 的最新论文条目，直接点击标题阅读论文或跳转代码库。\n\n通过以上步骤，你可以高效地利用 `Awesome-LongTailed-Learning` 解决数据不平衡问题，快速搭建并验证先进的长尾学习模型。","某医疗影像初创公司的算法团队正在开发一款罕见皮肤病辅助诊断系统，面临训练数据中常见病症样本海量而罕见病症样本极少的严峻挑战。\n\n### 没有 Awesome-LongTailed-Learning 时\n- **模型严重偏科**：直接训练导致模型“偷懒”，只愿识别常见皮肤病，对罕见病的召回率几乎为零，无法满足临床筛查需求。\n- **试错成本高昂**：团队需从零复现论文中的重采样或损失函数调整策略，缺乏统一代码基准，耗费数周时间仍在调试超参数。\n- **技术选型盲目**：面对类平衡、信息增强、模块改进等三大类九小种技术方案，缺乏系统性对比，难以判断哪种最适合当前数据分布。\n- **泛化能力薄弱**：自行设计的简单加权方法在噪声干扰下表现不稳定，导致模型在实际医院部署时误报率居高不下。\n\n### 使用 Awesome-LongTailed-Learning 后\n- **性能显著提升**：直接调用库中经过验证的 `LA`（Logit Adjustment）或 `Decoupled Training` 代码，罕见病识别准确率在三天内从 15% 提升至 68%。\n- **研发效率飞跃**：依托 curated list 和标准化代码库，团队迅速定位到适合医疗场景的 `Representation Learning` 方案，将算法验证周期从数周缩短至两天。\n- **决策科学清晰**：参考综述中的分类图谱和实证分析，快速锁定“解耦训练 + 语义增强”组合策略，避免了盲目的技术试探。\n- **鲁棒性增强**：采用集成的前沿算法（如 2025 年 ICCV 收录的噪声标签处理方法），有效抵抗数据标注噪声，模型在医院实地测试中表现稳定。\n\nAwesome-LongTailed-Learning 通过提供系统化的理论指引与开箱即用的代码基座，让开发者在数据极度不平衡的场景下也能高效构建公平、精准的深度学习模型。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVanint_Awesome-LongTailed-Learning_c7de7c5f.png","Vanint","Yifan Zhang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FVanint_20746345.jpg",null,"MiroMind AI & National University of Singapore","Singapore","yifan.zhang@miromind.ai","https:\u002F\u002Fsites.google.com\u002Fview\u002Fyifan-zhang\u002F","https:\u002F\u002Fgithub.com\u002FVanint",[87,91],{"name":88,"color":89,"percentage":90},"Python","#3572A5",99.7,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0.3,1014,127,"2026-03-27T17:36:29",4,"","未说明",{"notes":102,"python":100,"dependencies":103},"该仓库是一个长尾学习（Long-Tailed Learning）领域的论文与资源清单（Awesome 
List），主要包含论文列表、分类体系及对应代码库的链接，本身不是一个可直接运行的单一软件工具，因此 README 中未提供具体的操作系统、硬件配置或依赖库安装要求。具体运行环境需参考列表中各独立论文对应的官方代码仓库。",[],[18],"2026-03-27T02:49:30.150509","2026-04-06T08:52:34.881857",[],[]]
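文中多次出现的类型标记 `LA`（Logit Adjustment，对应列表中 ICLR 2021 的《通过 logit 调整进行长尾学习》）可以用几行代码说明其核心思想：按类别先验频率对模型输出的 logit 做事后偏移，使尾部类在预测时获得补偿。下面是一个最小示意实现，仅作说明之用；函数名 `logit_adjust` 为本文虚构，并非任何上述仓库的真实 API。

```python
import math

def logit_adjust(logits, class_counts, tau=1.0):
    """事后 logit 调整示意：每个 logit 减去 tau * log(类别先验)。"""
    total = sum(class_counts)
    priors = [c / total for c in class_counts]  # 各类别的经验先验频率
    return [z - tau * math.log(p) for z, p in zip(logits, priors)]

# 头部类（900 个样本）与尾部类（100 个样本）的原始 logit 非常接近：
adjusted = logit_adjust([2.0, 1.9], [900, 100], tau=1.0)
# 调整后尾部类获得更大的补偿，预测从头部类翻转为尾部类
print(adjusted.index(max(adjusted)))  # 输出: 1
```

实际使用时，`class_counts` 取训练集中各类别的样本数，`tau` 是可调的温度系数（`tau=0` 时退化为不做任何调整的普通 argmax）；这与列表中平衡 Softmax 等代价敏感方法在思想上一脉相承。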