[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-LAMDA-CL--PyCIL":3,"tool-LAMDA-CL--PyCIL":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":76,"languages":77,"stars":82,"forks":83,"last_commit_at":84,"license":85,"difficulty_score":10,"env_os":86,"env_gpu":87,"env_ram":86,"env_deps":88,"category_tags":99,"github_topics":101,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":112,"updated_at":113,"faqs":114,"releases":148},8876,"LAMDA-CL\u002FPyCIL","PyCIL","PyCIL: A Python Toolbox for Class-Incremental Learning","PyCIL 是一个专为“类增量学习”打造的 Python 工具箱，基于 PyTorch 构建。在人工智能领域，模型往往面临一个难题：当需要学习新类别的数据时，很容易遗忘旧知识，这种现象被称为“灾难性遗忘”。PyCIL 正是为了解决这一核心痛点而生，它提供了一套标准化的框架，帮助开发者在不重新训练所有历史数据的前提下，让模型持续、稳定地吸收新知识。\n\n这款工具非常适合人工智能研究人员、算法工程师以及高校学生使用。无论是想要复现前沿论文结果，还是希望快速验证自己的增量学习算法，PyCIL 都能极大降低入门门槛和开发成本。其最显著的技术亮点在于“大而全”的算法库：目前它已复现并集成了超过 20 种主流及最新的类增量学习方法，可能是当前开源社区中实现算法数量最多的工具箱之一。此外，项目维护活跃，不仅涵盖了传统方法，还紧跟技术潮流，及时纳入了基于预训练大模型（如 CLIP）的最新研究成果。通过 
PyCIL，用户可以轻松进行公平的算法对比与性能评估，是探索持续学习领域不可或缺的得力助手。","# PyCIL: A Python Toolbox for Class-Incremental Learning\n\n---\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#Introduction\">Introduction\u003C\u002Fa> •\n  \u003Ca href=\"#Methods-Reproduced\">Methods Reproduced\u003C\u002Fa> •\n  \u003Ca href=\"#Reproduced-Results\">Reproduced Results\u003C\u002Fa> •  \n  \u003Ca href=\"#how-to-use\">How To Use\u003C\u002Fa> •\n  \u003Ca href=\"#license\">License\u003C\u002Fa> •\n  \u003Ca href=\"#Acknowledgments\">Acknowledgments\u003C\u002Fa> •\n  \u003Ca href=\"#Contact\">Contact\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_f1edff7623d9.png\" width=\"800px\">\n\u003C\u002Fdiv>\n\n---\n\n\n\n\u003Cdiv align=\"center\">\n\n[![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green?style=flat-square)](https:\u002F\u002Fgithub.com\u002Fyaoyao-liu\u002Fclass-incremental-learning\u002Fblob\u002Fmaster\u002FLICENSE)[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.8-blue.svg?style=flat-square&logo=python&color=3776AB&logoColor=3776AB)](https:\u002F\u002Fwww.python.org\u002F) [![PyTorch](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpytorch-1.8-%237732a8?style=flat-square&logo=PyTorch&color=EE4C2C)](https:\u002F\u002Fpytorch.org\u002F) [![method](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReproduced-20-success)]() [![CIL](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FClassIncrementalLearning-SOTA-success??style=for-the-badge&logo=appveyor)](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Fincremental-learning)\n![visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d43c9891c46b.png)\n\n\u003C\u002Fdiv>\n\nWelcome to PyCIL, perhaps the toolbox for class-incremental learning with the **most** implemented methods. 
This is the code repository for \"PyCIL: A Python Toolbox for Class-Incremental Learning\" [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.12533) in PyTorch. If you use any content of this repo for your work, please cite the following bib entries:\n\n    @article{zhou2023pycil,\n        author = {Da-Wei Zhou and Fu-Yun Wang and Han-Jia Ye and De-Chuan Zhan},\n        title = {PyCIL: a Python toolbox for class-incremental learning},\n        journal = {SCIENCE CHINA Information Sciences},\n        year = {2023},\n        volume = {66},\n        number = {9},\n        pages = {197101},\n        doi = {https:\u002F\u002Fdoi.org\u002F10.1007\u002Fs11432-022-3600-y}\n      }\n    \n    @article{zhou2024class,\n        author = {Zhou, Da-Wei and Wang, Qi-Wei and Qi, Zhi-Hong and Ye, Han-Jia and Zhan, De-Chuan and Liu, Ziwei},\n        title = {Class-Incremental Learning: A Survey},\n        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n        volume={46},\n        number={12},\n        pages={9851--9873},\n        year = {2024}\n    }\n\n    @inproceedings{zhou2024continual,\n        title={Continual learning with pre-trained models: A survey},\n        author={Zhou, Da-Wei and Sun, Hai-Long and Ning, Jingyi and Ye, Han-Jia and Zhan, De-Chuan},\n        booktitle={IJCAI},\n        pages={8363-8371},\n        year={2024}\n    }\n\n\n## What's New\n- [2026-01]🌟 We have released [C3Box](https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FC3Box), a CLIP-based Class-Incremental Learning Toolbox. 
Have a try!\n- [2025-07]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08510) on class-incremental learning with CLIP (**ICCV 2025**)!\n- [2025-07]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08165) on pre-trained model-based class-incremental learning (**ICCV 2025**)!\n- [2025-07]🌟 Check out our [latest work](https:\u002F\u002Fopenreview.net\u002Fforum?id=dwjwvTwV3V&noteId=HVZe95quYK) on domain-incremental learning with PTM (**ICML 2025**)!\n- [2025-04]🌟 Add [TagFex](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823). State-of-the-art method of 2025!\n- [2025-03]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823) on class-incremental learning (**CVPR 2025**)! \n- [2025-02]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00911) on pre-trained model-based domain-incremental learning (**CVPR 2025**)! \n- [2025-02]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19270) on class-incremental learning with vision-language models (**TPAMI 2025**)!\n- [2024-12]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09441) on pre-trained model-based class-incremental learning (**AAAI 2025**)!\n- [2024-08]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338) on pre-trained model-based class-incremental learning (**IJCV 2024**)!\n- [2024-07]🌟 Check out our [rigorous and unified survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03648) about class-incremental learning, which introduces some memory-agnostic measures with holistic evaluations from multiple aspects (**TPAMI 2024**)!\n- [2024-06]🌟 Check out our [work about all-layer margin in class-incremental learning](https:\u002F\u002Fopenreview.net\u002Fforum?id=aksdU1KOpT) (**ICML 2024**)!\n- [2024-03]🌟 Check out our [latest work](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12030) on pre-trained model-based 
class-incremental learning (**CVPR 2024**)!\n- [2024-01]🌟 Check out our [latest survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16386) on pre-trained model-based continual learning (**IJCAI 2024**)!\n- [2023-09]🌟 We have released [PILOT](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT) toolbox for class-incremental learning with pre-trained models. Have a try!\n- [2023-07]🌟 Add [MEMO](https:\u002F\u002Fopenreview.net\u002Fforum?id=S07feAlQHgM), [BEEF](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3), and [SimpleCIL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338). State-of-the-art methods of 2023!\n- [2022-12]🌟 Add FeTrIL, PASS, IL2A, and SSRE.\n- [2022-10]🌟 PyCIL has been published in [SCIENCE CHINA Information Sciences](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11432-022-3600-y). Check out the [official introduction](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fh1qu2LpdvjeHAPLOnG478A)!  \n- [2022-08]🌟 Add RMM.\n- [2022-07]🌟 Add [FOSTER](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662). State-of-the-art method with a single backbone!\n- [2021-12]🌟 **Call For Feedback**: We add a \u003Ca href=\"#Awesome-Papers-using-PyCIL\">section\u003C\u002Fa> to introduce awesome works using PyCIL. If you are using PyCIL to publish your work in top-tier conferences\u002Fjournals, feel free to [contact us](mailto:zhoudw@lamda.nju.edu.cn) for details!\n- [2021-12]🌟 As team members are committed to other projects and in light of the intense demands of code reviews, **we will prioritize reviewing algorithms that have explicitly cited and implemented methods from our toolbox paper in their publications.** Please read the [PR policy](resources\u002FPR_policy.md) before submitting your code.\n\n## Introduction\n\nTraditional machine learning systems are deployed under the closed-world setting, which requires the entire training data before the offline training process. 
However, real-world applications often face the incoming new classes, and a model should incorporate them continually. The learning paradigm is called Class-Incremental Learning (CIL). We propose a Python toolbox that implements several key algorithms for class-incremental learning to ease the burden of researchers in the machine learning community. The toolbox contains implementations of a number of founding works of CIL, such as EWC and iCaRL, but also provides current state-of-the-art algorithms that can be used for conducting novel fundamental research. This toolbox, named PyCIL for Python Class-Incremental Learning, is open source with an MIT license.\n\nFor more information about incremental learning, you can refer to these reading materials:\n- A brief introduction (in Chinese) about CIL is available [here](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F490308909).\n- A PyTorch Tutorial to Class-Incremental Learning (with explicit codes and detailed explanations) is available [here](https:\u002F\u002Fgithub.com\u002FG-U-N\u002Fa-PyTorch-Tutorial-to-Class-Incremental-Learning).\n\n## Methods Reproduced\n\n-  `FineTune`: Baseline method which simply updates parameters on new tasks.\n-  `EWC`: Overcoming catastrophic forgetting in neural networks. PNAS2017 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00796)]\n-  `LwF`:  Learning without Forgetting. ECCV2016 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.09282)]\n-  `Replay`: Baseline method with exemplar replay.\n-  `GEM`: Gradient Episodic Memory for Continual Learning. NIPS2017 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08840)]\n-  `iCaRL`: Incremental Classifier and Representation Learning. CVPR2017 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.07725)]\n-  `BiC`: Large Scale Incremental Learning. CVPR2019 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.13260)]\n-  `WA`: Maintaining Discrimination and Fairness in Class Incremental Learning. 
CVPR2020 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07053)]\n-  `PODNet`: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV2020 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.13513)]\n-  `DER`: DER: Dynamically Expandable Representation for Class Incremental Learning. CVPR2021 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16788)]\n-  `PASS`: Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR2021 [[paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)]\n-  `RMM`: RMM: Reinforced Memory Management for Class-Incremental Learning. NeurIPS2021 [[paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html)]\n-  `IL2A`: Class-Incremental Learning via Dual Augmentation. NeurIPS2021 [[paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F77ee3bc58ce560b86c2b59363281e914-Paper.pdf)]\n-  `ACIL`: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection. NeurIPS 2022 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14922)]\n-  `SSRE`: Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning. CVPR2022 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06359)]\n-  `FeTrIL`: Feature Translation for Exemplar-Free Class-Incremental Learning. WACV2023 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13131)]\n-  `Coil`: Co-Transport for Class-Incremental Learning. ACM MM2021 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12654)]\n-  `FOSTER`: Feature Boosting and Compression for Class-incremental Learning. ECCV 2022 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)]\n-  `MEMO`: A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning. 
ICLR 2023 Spotlight [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=S07feAlQHgM)]\n-  `BEEF`: BEEF: Bi-Compatible Class-Incremental Learning via Energy-Based Expansion and Fusion. ICLR 2023 [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3)]\n-  `DS-AL`: A Dual-Stream Analytic Learning for Exemplar-Free Class-Incremental Learning. AAAI 2024 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17503)]\n-  `SimpleCIL`: Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need. IJCV 2024 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)]\n-  `Aper`: Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need. IJCV 2024 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)]\n-  `TagFex`: Task-Agnostic Guided Feature Expansion for Class-Incremental Learning. CVPR 2025 [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823)]\n\n\n## Reproduced Results\n\n#### CIFAR-100\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_9c621ad431bb.png\" width=\"900px\">\n\u003C\u002Fdiv>\n\n\n#### ImageNet-100\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d482fa9c07d0.png\" width=\"900px\">\n\u003C\u002Fdiv>\n\n#### ImageNet-100 (Top-5 Accuracy) \n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_6c81e6063d05.png\" width=\"500px\">\n\u003C\u002Fdiv>\n\n> More experimental details and results can be found in our [survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03648).\n\n## How To Use\n\n### Clone\n\nClone this GitHub repository:\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002FG-U-N\u002FPyCIL.git\ncd PyCIL\n```\n\n### Dependencies\n\n1. 
[torch 1.8.1](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch)\n2. [torchvision 0.6.0](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision)\n3. [tqdm](https:\u002F\u002Fgithub.com\u002Ftqdm\u002Ftqdm)\n4. [numpy](https:\u002F\u002Fgithub.com\u002Fnumpy\u002Fnumpy)\n5. [scipy](https:\u002F\u002Fgithub.com\u002Fscipy\u002Fscipy)\n6. [quadprog](https:\u002F\u002Fgithub.com\u002Fquadprog\u002Fquadprog)\n7. [POT](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT)\n\n### Run experiment\n\n1. Edit the `[MODEL NAME].json` file for global settings.\n2. Edit the hyper-parameters in the corresponding `[MODEL NAME].py` file (e.g., `models\u002Ficarl.py`).\n3. Run:\n\n```bash\npython main.py --config=.\u002Fexps\u002F[MODEL NAME].json\n```\n\nwhere `[MODEL NAME]` should be chosen from `finetune`, `ewc`, `lwf`, `replay`, `gem`, `icarl`, `bic`, `wa`, `podnet`, `der`, etc.\n\n4. `hyper-parameters`\n\nWhen using PyCIL, you can edit the global parameters and algorithm-specific hyper-parameters in the corresponding json file.\n\nThese parameters include:\n\n- **memory-size**: The total number of exemplars kept during the incremental learning process. Assuming there are $K$ classes at the current stage, the model will preserve $\left[\frac{memory-size}{K}\right]$ exemplars per class.\n- **init-cls**: The number of classes in the first incremental stage. Since CIL settings differ in the number of classes used in the first stage, our framework enables different choices to define the initial stage.\n- **increment**: The number of classes in each incremental stage $i$, $i$ > 1. By default, the number of classes is the same in every incremental stage.\n- **convnet-type**: The backbone network for the incremental model. According to the benchmark setting, `ResNet32` is utilized for `CIFAR100`, and `ResNet18` is used for `ImageNet`.\n- **seed**: The random seed adopted for shuffling the class order. 
According to the benchmark setting, it is set to 1993 by default.\n\nOther parameters related to model optimization, e.g., batch size, number of epochs, learning rate, learning rate decay, weight decay, milestones, and temperature, can be modified in the corresponding Python file.\n\n### Datasets\n\nWe have implemented the pre-processing of `CIFAR100`, `imagenet100`, and `imagenet1000`. When training on `CIFAR100`, this framework will automatically download it. When training on `imagenet100\u002F1000`, you should specify the folder of your dataset in `utils\u002Fdata.py`.\n\n```python\n    def download_data(self):\n        assert 0, \"You should specify the folder of your dataset\"\n        train_dir = '[DATA-PATH]\u002Ftrain\u002F'\n        test_dir = '[DATA-PATH]\u002Fval\u002F'\n```\n[Here](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1RBrPGrZzd1bHU5YG8PjdfwpHANZR_lhJ?usp=sharing) is the file list of ImageNet100 (also known as ImageNet-Sub).\n\n## Awesome Papers using PyCIL\n\n### Our Papers\n- External Knowledge Injection for CLIP-Based Class-Incremental Learning (**ICCV 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08510)] [[code](https:\u002F\u002Fgithub.com\u002FRenaissCode\u002FENGINE)]\n\n- Integrating Task-Specific and Universal Adapters for Pre-Trained Model-based Class-Incremental Learning (**ICCV 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08165)] [[code](https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FICCV2025-TUNA)]\n\n- Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts (**ICML 2025**) [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=dwjwvTwV3V&noteId=HVZe95quYK)] [[code](https:\u002F\u002Fgithub.com\u002FLain810\u002FDCE)]\n\n- Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning (**CVPR 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00911)] 
[[code](https:\u002F\u002Fgithub.com\u002FEstrella-fugaz\u002FCVPR25-Duct)]\n\n- Task-Agnostic Guided Feature Expansion for Class-Incremental Learning (**CVPR 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823)] [[code](https:\u002F\u002Fgithub.com\u002Fbwnzheng\u002FTagFex_CVPR2025)]\n  \n- Learning without Forgetting for Vision-Language Models (**TPAMI 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19270)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FPROOF)]\n\n- Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (**IJCV 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FRevisitingCIL)]\n \n- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox (**SCIS 2025**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07117)] [[code](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT)]\n\n- Class-Incremental Learning: A Survey (**TPAMI 2024**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03648)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FCIL_Survey\u002F)]\n\n- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning (**CVPR 2024**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12030 )] [[code](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FCVPR24-Ease)]\n\n- Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning (**ICML 2024**) [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=aksdU1KOpT)] [[code](https:\u002F\u002Fgithub.com\u002Fbwnzheng\u002FMRFA_ICML2024)]\n\n- Continual Learning with Pre-Trained Models: A Survey (**IJCAI 2024**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16386)] [[code](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT)]\n\n- Adaptive Adapter Routing for Long-Tailed Class-Incremental Learning (**Machine Learning 2024**) 
[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07446)] [[code](https:\u002F\u002Fgithub.com\u002Fvita-qzh\u002FAPART)]\n\n- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration (**NeurIPS 2023**)[[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05229)] [[Code](https:\u002F\u002Fgithub.com\u002Fwangkiw\u002FTEEN)]\n\n- BEEF: Bi-Compatible Class-Incremental Learning via Energy-Based Expansion and Fusion (**ICLR 2023**) [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3)] [[code](https:\u002F\u002Fgithub.com\u002FG-U-N\u002FICLR23-BEEF\u002F)]\n\n- A model or 603 exemplars: Towards memory-efficient class-incremental learning (**ICLR 2023**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13218)] [[code](https:\u002F\u002Fgithub.com\u002Fwangkiw\u002FICLR23-MEMO\u002F)]\n\n- Few-shot class-incremental learning by sampling multi-phase tasks (**TPAMI 2022**) [[paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17030.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FTPAMI-Limit)]\n\n- Foster: Feature Boosting and Compression for Class-incremental Learning (**ECCV 2022**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)] [[code](https:\u002F\u002Fgithub.com\u002FG-U-N\u002FECCV22-FOSTER\u002F)]\n\n- Forward compatible few-shot class-incremental learning (**CVPR 2022**) [[paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_Forward_Compatible_Few-Shot_Class-Incremental_Learning_CVPR_2022_paper.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FCVPR22-Fact)]\n\n- Co-Transport for Class-Incremental Learning (**ACM MM 2021**) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12654)] [[code](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FMM21-Coil)]\n\n### Other Awesome Works\n\n- Towards Realistic Evaluation of Industrial Continual Learning Scenarios with an Emphasis on Energy Consumption and Computational Footprint 
(**ICCV 2023**) [[paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FChavan_Towards_Realistic_Evaluation_of_Industrial_Continual_Learning_Scenarios_with_an_ICCV_2023_paper.pdf)][[code](https:\u002F\u002Fgithub.com\u002FVivek9Chavan\u002FRECIL)] \n\n- Dynamic Residual Classifier for Class Incremental Learning (**ICCV 2023**) [[paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FChen_Dynamic_Residual_Classifier_for_Class_Incremental_Learning_ICCV_2023_paper.pdf)][[code](https:\u002F\u002Fgithub.com\u002Fchen-xw\u002FDRC-CIL)] \n\n- S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning (**NeurIPS 2022**) [[paper](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZVe_WeMold)] [[code](https:\u002F\u002Fgithub.com\u002Fiamwangyabin\u002FS-Prompts)]\n\n- ...\n\n\n## License\n\nPlease check the MIT [license](.\u002FLICENSE) that is listed in this repository.\n\n## Acknowledgments\n\nWe thank the following repos for providing helpful components\u002Ffunctions used in our work.\n\n- [Continual-Learning-Reproduce](https:\u002F\u002Fgithub.com\u002Fzhchuu\u002Fcontinual-learning-reproduce)\n- [GEM](https:\u002F\u002Fgithub.com\u002Fhursung1\u002FGradientEpisodicMemory)\n- [FACIL](https:\u002F\u002Fgithub.com\u002Fmmasana\u002FFACIL)\n\nThe training flow and data configurations are based on Continual-Learning-Reproduce. The original information of the repo is available in the base branch.\n\n\n## Contact\n\nIf there are any questions, please feel free to propose new features by opening an issue or to contact the authors: **Da-Wei Zhou** ([zhoudw@lamda.nju.edu.cn](mailto:zhoudw@lamda.nju.edu.cn)) and **Fu-Yun Wang** (wangfuyun@smail.nju.edu.cn). 
Enjoy the code.\n\n\n## Star History 🚀\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d3215abbf992.png)](https:\u002F\u002Fstar-history.com\u002F#G-U-N\u002FPyCIL&Date)\n\n","# PyCIL：面向类增量学习的 Python 工具箱\n\n---\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"#Introduction\">简介\u003C\u002Fa> •\n  \u003Ca href=\"#Methods-Reproduced\">已复现的方法\u003C\u002Fa> •\n  \u003Ca href=\"#Reproduced-Results\">复现结果\u003C\u002Fa> •  \n  \u003Ca href=\"#how-to-use\">使用方法\u003C\u002Fa> •\n  \u003Ca href=\"#license\">许可证\u003C\u002Fa> •\n  \u003Ca href=\"#Acknowledgments\">致谢\u003C\u002Fa> •\n  \u003Ca href=\"#Contact\">联系方式\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_f1edff7623d9.png\" width=\"800px\">\n\u003C\u002Fdiv>\n\n---\n\n\n\n\u003Cdiv align=\"center\">\n\n[![LICENSE](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-MIT-green?style=flat-square)](https:\u002F\u002Fgithub.com\u002Fyaoyao-liu\u002Fclass-incremental-learning\u002Fblob\u002Fmaster\u002FLICENSE)[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.8-blue.svg?style=flat-square&logo=python&color=3776AB&logoColor=3776AB)](https:\u002F\u002Fwww.python.org\u002F) [![PyTorch](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpytorch-1.8-%237732a8?style=flat-square&logo=PyTorch&color=EE4C2C)](https:\u002F\u002Fpytorch.org\u002F) [![method](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReproduced-20-success)]() [![CIL](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FClassIncrementalLearning-SOTA-success??style=for-the-badge&logo=appveyor)](https:\u002F\u002Fpaperswithcode.com\u002Ftask\u002Fincremental-learning)\n![visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d43c9891c46b.png)\n\n\u003C\u002Fdiv>\n\n欢迎使用 PyCIL，这或许是目前实现方法**最多**的类增量学习工具箱。本仓库是论文《PyCIL：面向类增量学习的 Python 
工具箱》[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.12533) 的 PyTorch 代码库。如果您在工作中使用了本仓库中的任何内容，请引用以下 BibTeX 条目：\n\n    @article{zhou2023pycil,\n        author = {Da-Wei Zhou and Fu-Yun Wang and Han-Jia Ye and De-Chuan Zhan},\n        title = {PyCIL: a Python toolbox for class-incremental learning},\n        journal = {SCIENCE CHINA Information Sciences},\n        year = {2023},\n        volume = {66},\n        number = {9},\n        pages = {197101},\n        doi = {https:\u002F\u002Fdoi.org\u002F10.1007\u002Fs11432-022-3600-y}\n      }\n    \n    @article{zhou2024class,\n        author = {Zhou, Da-Wei and Wang, Qi-Wei and Qi, Zhi-Hong and Ye, Han-Jia and Zhan, De-Chuan and Liu, Ziwei},\n        title = {Class-Incremental Learning: A Survey},\n        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n        volume={46},\n        number={12},\n        pages={9851--9873},\n        year = {2024}\n    }\n\n    @inproceedings{zhou2024continual,\n        title={Continual learning with pre-trained models: A survey},\n        author={Zhou, Da-Wei and Sun, Hai-Long and Ning, Jingyi and Ye, Han-Jia and Zhan, De-Chuan},\n        booktitle={IJCAI},\n        pages={8363-8371},\n        year={2024}\n    }\n\n\n## 最新动态\n- [2026-01]🌟 我们发布了 [C3Box](https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FC3Box)，一个基于 CLIP 的类增量学习工具箱。快来试试吧！\n- [2025-07]🌟 查看我们关于使用 CLIP 进行类增量学习的最新工作（**ICCV 2025**）！\n- [2025-07]🌟 查看我们关于基于预训练模型的类增量学习的最新工作（**ICCV 2025**）！\n- [2025-07]🌟 查看我们关于使用 PTM 进行领域增量学习的最新工作（**ICML 2025**）！\n- [2025-04]🌟 新增 [TagFex](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823)。这是 2025 年的最先进方法！\n- [2025-03]🌟 查看我们关于类增量学习的最新工作（**CVPR 2025**）！ \n- [2025-02]🌟 查看我们关于基于预训练模型的领域增量学习的最新工作（**CVPR 2025**）！ \n- [2025-02]🌟 查看我们关于使用视觉语言模型进行类增量学习的最新工作（**TPAMI 2025**）！\n- [2024-12]🌟 查看我们关于基于预训练模型的类增量学习的最新工作（**AAAI 2025**）！\n- [2024-08]🌟 查看我们关于基于预训练模型的类增量学习的最新工作（**IJCV 2024**）！\n- [2024-07]🌟 查看我们关于类增量学习的严谨且统一的综述（**TPAMI 2024**），其中引入了一些与记忆无关的度量，并从多个角度进行了全面评估！\n- [2024-06]🌟 
查看我们关于类增量学习中全层边距的工作（**ICML 2024**）！\n- [2024-03]🌟 查看我们关于基于预训练模型的类增量学习的最新工作（**CVPR 2024**）！\n- [2024-01]🌟 查看我们关于基于预训练模型的持续学习的最新综述（**IJCAI 2024**）！\n- [2023-09]🌟 我们发布了 [PILOT](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT) 工具箱，用于基于预训练模型的类增量学习。快来试试吧！\n- [2023-07]🌟 新增 [MEMO](https:\u002F\u002Fopenreview.net\u002Fforum?id=S07feAlQHgM)、[BEEF](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3) 和 [SimpleCIL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)。这些都是 2023 年的最先进方法！\n- [2022-12]🌟 新增 FeTrIL、PASS、IL2A 和 SSRE。\n- [2022-10]🌟 PyCIL 已发表在《SCIENCE CHINA Information Sciences》上（https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11432-022-3600-y）。请查看[官方介绍](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fh1qu2LpdvjeHAPLOnG478A)！  \n- [2022-08]🌟 新增 RMM。\n- [2022-07]🌟 新增 [FOSTER](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)。这是一种仅使用单一主干网络的最先进方法！\n- [2021-12]🌟 **征集反馈**：我们新增了一个\u003Ca href=\"#Awesome-Papers-using-PyCIL\">板块\u003C\u002Fa>，用于介绍使用 PyCIL 的优秀工作。如果您正在使用 PyCIL 在顶级会议或期刊上发表论文，请随时[联系我们](mailto:zhoudw@lamda.nju.edu.cn)了解详情！\n- [2021-12]🌟 由于团队成员正致力于其他项目，且代码评审需求十分紧迫，**我们将优先评审那些在其论文中明确引用并实现了我们工具箱论文中方法的算法。** 请在提交代码之前阅读[PR 政策](resources\u002FPR_policy.md)。\n\n## 简介\n\n传统的机器学习系统通常在封闭世界假设下部署，这意味着在离线训练过程中需要使用全部的训练数据。然而，在现实应用中，经常会遇到新类别的不断出现，模型需要能够持续地将这些新类别纳入其中。这种学习范式被称为类别增量学习（CIL)。我们提出了一款Python工具箱，实现了多种用于类别增量学习的关键算法，以减轻机器学习社区研究人员的工作负担。该工具箱不仅包含了EWC和iCaRL等类别增量学习领域的奠基性工作实现，还提供了当前最先进的算法，可用于开展新颖的基础研究。这款名为PyCIL（Python类别增量学习）的工具箱采用MIT许可证，完全开源。\n\n如需了解更多关于增量学习的信息，可以参考以下资料：\n- 关于CIL的简要中文介绍可参见[这里](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F490308909)。\n- 一个包含明确代码和详细解释的PyTorch类别增量学习教程可参见[这里](https:\u002F\u002Fgithub.com\u002FG-U-N\u002Fa-PyTorch-Tutorial-to-Class-Incremental-Learning)。\n\n## 复现的方法\n\n-  `FineTune`: 基线方法，仅在新任务上更新参数。\n-  `EWC`: 克服神经网络中的灾难性遗忘。PNAS2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00796)]\n-  `LwF`: 不忘旧知识的学习。ECCV2016 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.09282)]\n-  `Replay`: 
基于样本回放的基线方法。\n-  `GEM`: 用于持续学习的梯度情节记忆。NIPS2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08840)]\n-  `iCaRL`: 增量分类器与表征学习。CVPR2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.07725)]\n-  `BiC`: 大规模增量学习。CVPR2019 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.13260)]\n-  `WA`: 在类别增量学习中保持判别能力和公平性。CVPR2020 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07053)]\n-  `PODNet`: PODNet：用于小任务增量学习的池化输出蒸馏。ECCV2020 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.13513)]\n-  `DER`: 用于类别增量学习的动态可扩展表征。CVPR2021 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16788)]\n-  `PASS`: 用于增量学习的原型增强与自监督。CVPR2021 [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)]\n-  `RMM`: 用于类别增量学习的强化内存管理。NeurIPS2021 [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html)]\n-  `IL2A`: 通过双重增强实现类别增量学习。NeurIPS2021 [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F77ee3bc58ce560b86c2b59363281e914-Paper.pdf)]\n-  `ACIL`: 基于绝对记忆和隐私保护的解析式类别增量学习。NeurIPS 2022 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14922)]\n-  `SSRE`: 用于无样本类别增量学习的自我维持表征扩展。CVPR2022 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06359)]\n-  `FeTrIL`: 用于无样本类别增量学习的特征迁移。WACV2023 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13131)]\n-  `Coil`: 用于类别增量学习的协同传输。ACM MM2021 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12654)]\n-  `FOSTER`: 用于类别增量学习的特征增强与压缩。ECCV 2022 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)]\n-  `MEMO`: 模型还是603个样本？迈向内存高效的类别增量学习。ICLR 2023 Spotlight [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=S07feAlQHgM)]\n-  `BEEF`: BEEF：基于能量扩张与融合的双兼容类别增量学习。ICLR 2023 [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3)]\n-  `DS-AL`: 用于无样本类别增量学习的双流解析学习。AAAI 2024 
[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.17503)]\n-  `SimpleCIL`: 重新审视预训练模型下的类别增量学习：泛化能力和适应性就是全部所需。IJCV 2024 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)]\n-  `Aper`: 重新审视预训练模型下的类别增量学习：泛化能力和适应性就是全部所需。IJCV 2024 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)]\n-  `TagFex`: 用于类别增量学习的任务无关引导特征扩展。CVPR 2025 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823)]\n\n\n## 复现的结果\n\n#### CIFAR-100\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_9c621ad431bb.png\" width=\"900px\">\n\u003C\u002Fdiv>\n\n\n#### ImageNet-100\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d482fa9c07d0.png\" width=\"900px\">\n\u003C\u002Fdiv>\n\n#### ImageNet-100（Top-5准确率）\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_6c81e6063d05.png\" width=\"500px\">\n\u003C\u002Fdiv>\n\n> 更多实验细节和结果请参阅我们的[综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03648)。\n\n## 使用方法\n\n### 克隆\n\n克隆本GitHub仓库：\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002FG-U-N\u002FPyCIL.git\ncd PyCIL\n```\n\n### 依赖项\n\n1. [torch 1.8.1](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fpytorch)\n2. [torchvision 0.6.0](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fvision)\n3. [tqdm](https:\u002F\u002Fgithub.com\u002Ftqdm\u002Ftqdm)\n4. [numpy](https:\u002F\u002Fgithub.com\u002Fnumpy\u002Fnumpy)\n5. [scipy](https:\u002F\u002Fgithub.com\u002Fscipy\u002Fscipy)\n6. [quadprog](https:\u002F\u002Fgithub.com\u002Fquadprog\u002Fquadprog)\n7. [POT](https:\u002F\u002Fgithub.com\u002FPythonOT\u002FPOT)\n\n### 运行实验\n\n1. 编辑 `[模型名称].json` 文件以设置全局参数。\n2. 在对应的 `[模型名称].py` 文件中（例如 `models\u002Ficarl.py`）编辑超参数。\n3. 
运行以下命令：\n\n```bash\npython main.py --config=.\u002Fexps\u002F[模型名称].json\n```\n\n其中 `[模型名称]` 应从 `finetune`、`ewc`、`lwf`、`replay`、`gem`、`icarl`、`bic`、`wa`、`podnet`、`der` 等中选择。\n\n4. **超参数**\n\n使用 PyCIL 时，您可以在相应的 JSON 文件中编辑全局参数和特定算法的超参数。\n\n这些参数包括：\n\n- **memory-size**：增量学习过程中的总示例数。假设当前阶段有 $K$ 个类别，则模型将为每个类别保留 $\left[\frac{\text{memory-size}}{K}\right]$ 个示例。\n- **init-cls**：第一个增量阶段的类别数量。由于 CIL 中第一阶段的类别数量设置不同，我们的框架支持多种选择来定义初始阶段。\n- **increment**：每个增量阶段 $i$ 的类别数量，$i > 1$。默认情况下，每个增量阶段的类别数量相等。\n- **convnet-type**：增量模型的主干网络。根据基准设置，`CIFAR100` 使用 `ResNet32`，而 `ImageNet` 使用 `ResNet18`。\n- **seed**：用于打乱类别顺序的随机种子。根据基准设置，默认值为 1993。\n\n其他与模型优化相关的参数，如批量大小、优化轮数、学习率、学习率衰减、权重衰减、里程碑和温度，可在对应的 Python 文件中进行修改。\n\n### 数据集\n\n我们已实现了 `CIFAR100`、`imagenet100` 和 `imagenet1000` 的预处理。在训练 `CIFAR100` 时，该框架会自动下载数据集。而在训练 `imagenet100\u002F1000` 时，您需要在 `utils\u002Fdata.py` 中指定您的数据集文件夹路径。\n\n```python\n    def download_data(self):\n        assert 0,\"您应指定您的数据集文件夹\"\n        train_dir = '[DATA-PATH]\u002Ftrain\u002F'\n        test_dir = '[DATA-PATH]\u002Fval\u002F'\n```\n[此处](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1RBrPGrZzd1bHU5YG8PjdfwpHANZR_lhJ?usp=sharing) 是 ImageNet100（或称 ImageNet-Sub）的数据文件列表。\n\n## 使用 PyCIL 的优秀论文\n\n### 我们的论文\n- 基于 CLIP 的类增量学习中的外部知识注入（ICCV 2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08510)] [[代码](https:\u002F\u002Fgithub.com\u002FRenaissCode\u002FENGINE)]\n  \n- 针对预训练模型的类增量学习中任务特定适配器与通用适配器的融合（ICCV 2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08165)] [[代码](https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FICCV2025-TUNA)]\n\n- 通过双平衡协作专家解决不平衡领域增量学习问题（ICML 2025）[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=dwjwvTwV3V&noteId=HVZe95quYK)] [[代码](https:\u002F\u002Fgithub.com\u002FLain810\u002FDCE)]\n\n- 预训练模型驱动下的领域增量学习中的双重整合（CVPR 2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.00911)] [[代码](https:\u002F\u002Fgithub.com\u002FEstrella-fugaz\u002FCVPR25-Duct)]\n\n- 面向类增量学习的任务无关引导特征扩展（CVPR 
2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00823)] [[代码](https:\u002F\u002Fgithub.com\u002Fbwnzheng\u002FTagFex_CVPR2025)]\n\n- 视觉-语言模型的无遗忘学习（TPAMI 2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19270)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FPROOF)]\n\n- 重新审视基于预训练模型的类增量学习：泛化能力和适应性就是全部所需（IJCV 2024）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FRevisitingCIL)]\n\n- PILOT：基于预训练模型的持续学习工具箱（SCIS 2025）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07117)] [[代码](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT)]\n\n- 类增量学习综述（TPAMI 2024）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03648)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FCIL_Survey\u002F)]\n\n- 预训练模型驱动下的可扩展子空间集成类增量学习（CVPR 2024）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.12030)] [[代码](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FCVPR24-Ease)]\n\n- 面向类增量学习的多层重演特征增强（ICML 2024）[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=aksdU1KOpT)] [[代码](https:\u002F\u002Fgithub.com\u002Fbwnzheng\u002FMRFA_ICML2024)]\n\n- 基于预训练模型的持续学习综述（IJCAI 2024）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16386)] [[代码](https:\u002F\u002Fgithub.com\u002Fsun-hailong\u002FLAMDA-PILOT)]\n\n- 长尾类增量学习中的自适应适配器路由（Machine Learning 2024）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.07446)] [[代码](https:\u002F\u002Fgithub.com\u002Fvita-qzh\u002FAPART)]\n\n- 无需训练的原型校准实现的小样本类增量学习（NeurIPS 2023）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05229)] [[代码](https:\u002F\u002Fgithub.com\u002Fwangkiw\u002FTEEN)]\n\n- BEEF：基于能量的扩展与融合实现的双向兼容类增量学习（ICLR 2023）[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3)] [[代码](https:\u002F\u002Fgithub.com\u002FG-U-N\u002FICLR23-BEEF\u002F)]\n\n- 一个模型还是 603 个示例？迈向内存高效的类增量学习（ICLR 2023）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13218)] [[代码](https:\u002F\u002Fgithub.com\u002Fwangkiw\u002FICLR23-MEMO\u002F)]\n\n- 
通过采样多阶段任务实现的小样本类增量学习（TPAMI 2022）[[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.17030.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FTPAMI-Limit)]\n\n- FOSTER：面向类增量学习的特征增强与压缩（ECCV 2022）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)] [[代码](https:\u002F\u002Fgithub.com\u002FG-U-N\u002FECCV22-FOSTER\u002F)]\n\n- 向前兼容的小样本类增量学习（CVPR 2022）[[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FZhou_Forward_Compatible_Few-Shot_Class-Incremental_Learning_CVPR_2022_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FCVPR22-Fact)]\n\n- 面向类增量学习的协同运输（ACM MM 2021）[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12654)] [[代码](https:\u002F\u002Fgithub.com\u002Fzhoudw-zdw\u002FMM21-Coil)]\n\n### 其他优秀作品\n\n- 面向工业持续学习场景的现实评估：重点关注能耗与计算开销（**ICCV 2023**）[[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FChavan_Towards_Realistic_Evaluation_of_Industrial_Continual_Learning_Scenarios_with_an_ICCV_2023_paper.pdf)][[代码](https:\u002F\u002Fgithub.com\u002FVivek9Chavan\u002FRECIL)] \n\n- 用于类别增量学习的动态残差分类器（**ICCV 2023**）[[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FChen_Dynamic_Residual_Classifier_for_Class_Incremental_Learning_ICCV_2023_paper.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fchen-xw\u002FDRC-CIL)] \n\n- 基于预训练Transformer的S-Prompts学习：面向领域增量学习的奥卡姆剃刀原则（**NeurIPS 2022**）[[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZVe_WeMold)][[代码](https:\u002F\u002Fgithub.com\u002Fiamwangyabin\u002FS-Prompts)]\n\n- ...\n\n\n## 许可证\n\n请查看本仓库中列出的 MIT  [许可证](.\u002FLICENSE)。\n\n## 致谢\n\n我们感谢以下项目在我们的工作中提供了有用的组件或功能。\n\n- [Continual-Learning-Reproduce](https:\u002F\u002Fgithub.com\u002Fzhchuu\u002Fcontinual-learning-reproduce)\n- [GEM](https:\u002F\u002Fgithub.com\u002Fhursung1\u002FGradientEpisodicMemory)\n- [FACIL](https:\u002F\u002Fgithub.com\u002Fmmasana\u002FFACIL)\n\n我们的训练流程和数据配置基于 
Continual-Learning-Reproduce。该仓库的基础分支中包含了原始信息。\n\n\n## 联系方式\n\n如有任何问题，欢迎通过提交 issue 提出新功能建议，或直接联系作者：**Da-Wei Zhou**([zhoudw@lamda.nju.edu.cn](mailto:zhoudw@lamda.nju.edu.cn)) 和 **Fu-Yun Wang**(wangfuyun@smail.nju.edu.cn)。祝您使用愉快。\n\n\n## 星标历史 🚀\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_readme_d3215abbf992.png)](https:\u002F\u002Fstar-history.com\u002F#G-U-N\u002FPyCIL&Date)","# PyCIL 快速上手指南\n\nPyCIL 是一个基于 PyTorch 的类增量学习（Class-Incremental Learning, CIL）Python 工具箱，集成了包括 EWC、iCaRL、FOSTER、SimpleCIL 等在内的 20+ 种经典及最先进（SOTA）算法，旨在降低研究人员复现和探索增量学习算法的门槛。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux \u002F macOS \u002F Windows\n*   **Python 版本**：3.8 及以上\n*   **核心框架**：\n    *   PyTorch >= 1.8\n    *   torchvision >= 0.6.0\n*   **其他依赖库**：\n    *   `tqdm`, `numpy`, `scipy`\n    *   `quadprog` (用于 GEM 等方法)\n    *   `POT` (Python Optimal Transport，用于 Coil 等基于最优传输的方法)\n\n> **提示**：国内用户建议使用清华或阿里镜像源加速 pip 包安装。\n\n## 安装步骤\n\n### 1. 克隆项目\n首先从 GitHub 克隆代码仓库：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FG-U-N\u002FPyCIL.git\ncd PyCIL\n```\n\n### 2. 安装依赖\n推荐使用 `pip` 安装所需依赖。为确保下载速度，可使用国内镜像源：\n\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n如果项目中未提供 `requirements.txt`，请手动安装核心依赖（注意：版本约束需加引号，以免 `>` 被 shell 解析为重定向）：\n\n```bash\npip install \"torch>=1.8\" \"torchvision>=0.6.0\" tqdm numpy scipy quadprog POT -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：`quadprog` 在 Windows 下可能需要预编译的二进制包，若安装失败，请前往 [Gohlke 网站](https:\u002F\u002Fwww.lfd.uci.edu\u002F~gohlke\u002Fpythonlibs\u002F#quadprog) 下载对应版本的 `.whl` 文件进行安装。\n\n## 基本使用\n\nPyCIL 通过配置文件和命令行参数来运行实验。以下是运行一个标准类增量学习实验的最简流程。\n\n### 1. 配置模型参数\n在 `exps\u002F` 目录下找到对应模型的 JSON 配置文件（例如 `finetune.json` 或 `icarl.json`）。\n*   **全局设置**：编辑 `[MODEL NAME].json` 文件，设置数据集路径、增量阶段数、随机种子等全局参数。\n*   **超参数微调**：如需修改特定算法的超参数，可直接编辑 `models\u002F[MODEL NAME].py` 文件。\n\n### 2. 
运行实验\n使用 `main.py` 启动训练，指定配置文件路径：\n\n```bash\npython main.py --config=.\u002Fexps\u002Ficarl.json\n```\n\n将 `icarl` 替换为您想要复现的其他算法名称（如 `finetune`, `ewc`, `foster`, `simplecil` 等）。\n\n### 3. 查看结果\n程序运行结束后，日志和结果通常会保存在 `logs\u002F` 或项目根目录下的指定输出文件夹中，包含各阶段的准确率及最终平均准确率。\n\n---\n**引用说明**：如果您在研究中使用了 PyCIL，请务必引用以下论文：\n```bibtex\n@article{zhou2023pycil,\n  title={PyCIL: a Python toolbox for class-incremental learning},\n  author={Zhou, Da-Wei and Wang, Fu-Yun and Ye, Han-Jia and Zhan, De-Chuan},\n  journal={SCIENCE CHINA Information Sciences},\n  year={2023}\n}\n```","某自动驾驶初创公司的算法团队需要让车辆识别系统在不断新增交通标志类别（如从仅识别限速牌扩展到识别施工警示、临时改道牌）时，无需重新训练整个模型即可持续学习。\n\n### 没有 PyCIL 时\n- **复现成本极高**：团队需手动查找并复现论文代码，不同增量学习算法（如 iCaRL、DER）的数据加载与训练逻辑差异巨大，耗费数周时间搭建基础框架。\n- **评估标准混乱**：缺乏统一的评测脚本，每次对比新算法时，因数据划分或指标计算方式不一致，导致实验结果无法横向公平比较。\n- **灾难性遗忘难控**：自研的简单微调策略导致模型在学会新标志后，迅速遗忘旧类别的特征，识别准确率断崖式下跌，且难以快速定位是哪种机制失效。\n- **扩展性差**：每引入一种新 SOTA 方法都需要重构大量代码，难以快速验证多种策略组合以应对实际路测中的长尾分布问题。\n\n### 使用 PyCIL 后\n- **开箱即用**：直接调用 PyCIL 内置的 20+ 种复现算法（包括最新的 TagFex 等），通过统一接口即可在几分钟内切换不同策略进行训练，研发周期缩短 80%。\n- **标准化评测**：利用工具自带的标准化评估模块，确保所有实验在相同的数据增量设置和指标体系下运行，快速产出可信的对比报告。\n- **有效抑制遗忘**：借助 PyCIL 中成熟的回放机制与正则化方法，模型在学习新交通标志时显著保留了对旧类别的记忆，整体平均准确率大幅提升。\n- **灵活迭代**：基于模块化设计，团队能轻松组合不同骨干网络与增量策略，快速针对特定路况数据定制最优方案，加速模型落地部署。\n\nPyCIL 将原本繁琐的学术算法复现工作转化为标准化的工程流程，让团队能专注于解决自动驾驶场景下的真实数据挑战而非重复造轮子。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FLAMDA-CL_PyCIL_f1edff76.png","LAMDA-CL","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FLAMDA-CL_e5ec881a.png","LAMDA group, Continual Learning Lab",null,"https:\u002F\u002Fgithub.com\u002FLAMDA-CL",[78],{"name":79,"color":80,"percentage":81},"Python","#3572A5",100,1073,159,"2026-04-16T17:59:27","NOASSERTION","未说明","未说明 (基于 PyTorch，通常建议配备 NVIDIA GPU 以加速训练)",{"notes":89,"python":90,"dependencies":91},"README 中明确列出的依赖版本较旧（PyTorch 1.8.1），在实际运行时可能需要根据硬件环境调整版本兼容性。运行实验前需编辑 JSON 配置文件和对应的模型 Python 
文件来设置超参数。","3.8+",[92,93,94,95,96,97,98],"torch==1.8.1","torchvision==0.6.0","tqdm","numpy","scipy","quadprog","POT",[100,14],"其他",[102,103,104,105,106,107,108,109,110,111],"incremental-learning","lifelong-learning","continual-learning","machine-learning","reproducible-research","deep-learning","pytorch","open-environment-recognition","open-world","representation-learning","2026-03-27T02:49:30.150509","2026-04-18T14:15:04.546821",[115,120,125,130,135,139,143],{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},39812,"为什么无法复现论文中的结果（如 BiC 或 iCaRL）？","默认配置中偏差调整（bias tuning）的 epoch 数量可能过高，导致新类别的性能较差。您可以修改相关参数以获得更好的性能。此外，请检查是否使用了单 GPU 训练，这可能与多 GPU 环境下的默认行为不同。具体代码位置参考：models\u002Fbic.py 第 177 行附近。","https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FPyCIL\u002Fissues\u002F50",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},39813,"为什么 FeTrIL 模型的准确率非常低？","通常是因为将 BatchNorm 层的 `track_running_stats` 设置为了 `False`。这会导致 `running_mean` 和 `running_var` 被冻结，使得从头开始训练 CNN 变得非常困难。建议将其设置为 `True` 或移除该设置，以允许统计信息正常更新。","https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FPyCIL\u002Fissues\u002F22",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},39814,"使用 BiC 方法时出现 \"ValueError: attempt to get argmin of an empty sequence\" 错误怎么办？","当每个类别的最小样本数低于平均示例数（exemplar）时会出现此问题。解决方法是在配置文件中将 `fixed_memory` 设置为 `True`。此外，如果是自定义数据集，请检查数据加载是否正确，可以在 `data.py` 的 `download_data` 方法中添加打印语句来验证 `train_data` 和 `train_targets` 的形状。","https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FPyCIL\u002Fissues\u002F41",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},39815,"如何在 PyCIL 中加载自定义模型或权重进行增量训练？","框架本身没有直接提供加载任意权重的简单开关。如果需要加载自己的模型或在特定阶段（如 0-7 类训练后）继续训练，您需要学习如何编写和实现 PyTorch 代码，并直接修改源代码来实现权重的加载和后续任务的增量训练逻辑。","https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FPyCIL\u002Fissues\u002F77",{"id":136,"question_zh":137,"answer_zh":138,"source_url":134},39816,"初始训练阶段（Base Session）是否包含增量学习方法？准确率是如何计算的？","初始训练阶段通常不包含增量方法，是标准的监督训练。关于准确率计算，主要有两种方式：NME（最近均值分类器，iCaRL 
论文中提到）和普通准确率（输出最大概率）。代码中可能存在 `weight_align` 步骤在第一次任务评估前执行的情况，但这通常不影响第一阶段的基准准确率计算。",{"id":140,"question_zh":141,"answer_zh":142,"source_url":134},39817,"CIFAR-10 数据集的 memory_size 应该设置为多少？","Memory size 的设置取决于具体的实验协议和类别数量。对于 CIFAR-10 这样类别较少（10 类）的数据集，如果采用固定总内存策略（如总共 2000 个样本），每类分配的样本会很多；如果采用每类固定样本策略，则需根据 `memory_per_class` 计算。建议参考官方配置文件（如 exps\u002F 目录下的 json 文件）中针对类似规模数据集的设置，并根据 `fixed_memory` 参数决定是限制总数还是每类数量。",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},39818,"论文中报告的复现结果与代码运行结果不一致（例如 iCaRL 在 10 步设置下准确率偏低）的原因是什么？","这可能是由于社区中存在不同的实验协议（protocols）。例如，关于内存管理是固定总样本数（fixed_memory=False, 总大小固定）还是固定每类样本数（fixed_memory=True），不同的设置会导致结果差异。请仔细核对您的 JSON 配置文件中的 `fixed_memory`、`memory_size` 和 `memory_per_class` 设置是否与论文实验部分描述的一致。","https:\u002F\u002Fgithub.com\u002FLAMDA-CL\u002FPyCIL\u002Fissues\u002F4",[149,154],{"id":150,"version":151,"summary_zh":152,"released_at":153},315720,"v0.2.1","在本次更新中，我们支持2023年的最先进方法：\n\n-  `MEMO`：一个由603个范例组成的模型——迈向内存高效的类别增量学习。ICLR 2023 Spotlight [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=S07feAlQHgM)]\n-  `BEEF`：BEEF：基于能量的扩展与融合实现双向兼容的类别增量学习。ICLR 2023 [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=iP77_axu0h3)]\n-  `SimpleCIL`：重新审视基于预训练模型的类别增量学习——泛化能力和自适应性足矣。arXiv 2023 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07338)]\n\n\n\n  \nCheers!\n\n","2023-07-13T15:37:31",{"id":155,"version":156,"summary_zh":157,"released_at":158},315721,"v0.1","这是 PyCIL 的首个 GitHub 发布版本。所复现的方法如下：\n\n- `FineTune`：基准方法，仅在新任务上更新模型参数，但会遭受灾难性遗忘问题。默认情况下，不会更新与先前类别输出相关的权重。\n-  `EWC`：克服神经网络中的灾难性遗忘。PNAS 2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00796)]\n-  `LwF`：无遗忘学习。ECCV 2016 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.09282)]\n-  `Replay`：使用外存样例的基准方法。\n-  `GEM`：用于持续学习的梯度情景记忆。NIPS 2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08840)]\n-  `iCaRL`：增量分类与表征学习。CVPR 2017 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.07725)]\n-  `BiC`：大规模增量学习。CVPR 2019 
[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.13260)]\n- `WA`：在类别增量学习中保持判别能力和公平性。CVPR 2020 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07053)]\n- `PODNet`：PODNet：面向小任务增量学习的池化输出蒸馏。ECCV 2020 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.13513)]\n- `DER`：用于类别增量学习的动态可扩展表征。CVPR 2021 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16788)]\n-  `PASS`：原型增强与自监督用于增量学习。CVPR 2021 [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)]\n- `RMM`：用于类别增量学习的强化内存管理。NeurIPS 2021 [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002F1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html)]\n- `IL2A`：通过双重增强实现类别增量学习。NeurIPS 2021 [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002F77ee3bc58ce560b86c2b59363281e914-Paper.pdf)]\n-  `SSRE`：用于无外存样例类别增量学习的自我维持表征扩展。CVPR 2022 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.06359)]\n-  `FeTrIL`：面向无外存样例类别增量学习的特征迁移。WACV 2023 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13131)]\n-  `Coil`：用于类别增量学习的协同传输。ACM MM 2021 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12654)]\n-  `FOSTER`：用于类别增量学习的特征增强与压缩。ECCV 2022 [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.04662)]\n\n敬请期待 PyCIL 中更多最先进方法的更新！","2022-12-12T09:06:34"]