[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-adapter-hub--adapters":3,"tool-adapter-hub--adapters":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":74,"owner_website":78,"owner_url":79,"languages":80,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":32,"env_os":76,"env_gpu":100,"env_ram":101,"env_deps":102,"category_tags":108,"github_topics":109,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":149},5694,"adapter-hub\u002Fadapters","adapters","A Unified Library for Parameter-Efficient and Modular Transfer Learning ","Adapters 是一个专为高效参数迁移学习设计的统一库，作为 HuggingFace Transformers 的增强插件，它让开发者能以极低的代码成本，在超过 20 种主流 Transformer 模型中集成 10 多种适配器方法。\n\n在大模型时代，全量微调往往消耗巨大的计算资源且难以管理。Adapters 核心解决了这一痛点，通过“参数高效微调”技术，让用户无需更新模型全部参数，仅训练少量新增模块即可适配新任务。这不仅大幅降低了显存需求和训练时间，还支持将针对不同任务训练的模块灵活合并或组合，实现了真正的模块化学习。\n\n该工具非常适合 NLP 领域的研究人员与算法工程师，尤其是那些希望在有限算力下进行大模型定制、探索前沿微调策略（如 Q-LoRA 
量化训练）或需要频繁切换多任务场景的团队。其独特亮点在于提供了高度统一的接口，支持从简单的 LoRA 配置到复杂的任务算术合并，甚至允许用户像搭积木一样通过“组合块”灵活编排多个适配器。无论是快速加载预训练适配器进行推理，还是在现有模型架构上无缝添加新模块，Adapters 都能以简洁的代码流程，助力用户轻松开展高效的自然语言处理研究与开发。","Adapters 是一个专为高效参数迁移学习设计的统一库，作为 HuggingFace Transformers 的增强插件，它让开发者能以极低的代码成本，在超过 20 种主流 Transformer 模型中集成 10 多种适配器方法。\n\n在大模型时代，全量微调往往消耗巨大的计算资源且难以管理。Adapters 核心解决了这一痛点，通过“参数高效微调”技术，让用户无需更新模型全部参数，仅训练少量新增模块即可适配新任务。这不仅大幅降低了显存需求和训练时间，还支持将针对不同任务训练的模块灵活合并或组合，实现了真正的模块化学习。\n\n该工具非常适合 NLP 领域的研究人员与算法工程师，尤其是那些希望在有限算力下进行大模型定制、探索前沿微调策略（如 Q-LoRA 量化训练）或需要频繁切换多任务场景的团队。其独特亮点在于提供了高度统一的接口，支持从简单的 LoRA 配置到复杂的任务算术合并，甚至允许用户像搭积木一样通过“组合块”灵活编排多个适配器。无论是快速加载预训练适配器进行推理，还是在现有模型架构上无缝添加新模块，Adapters 都能以简洁的代码流程，助力用户轻松开展高效的自然语言处理研究与开发。","\u003C!---\nCopyright 2020 The AdapterHub Team. All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n-->\n\n\u003Cp align=\"center\">\n\u003Cimg style=\"vertical-align:middle\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fadapter-hub_adapters_readme_6eaedf6ee7fb.png\" width=\"80\" \u002F>\n\u003C\u002Fp>\n\u003Ch1 align=\"center\">\n\u003Cspan>\u003Ci>Adapters\u003C\u002Fi>\u003C\u002Fspan>\n\u003C\u002Fh1>\n\n\u003Ch3 align=\"center\">\nA Unified Library for Parameter-Efficient and Modular Transfer Learning\n\u003C\u002Fh3>\n\u003Ch3 align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fadapterhub.ml\">Website\u003C\u002Fa>\n    &nbsp; • &nbsp;\n    \u003Ca href=\"https:\u002F\u002Fdocs.adapterhub.ml\">Documentation\u003C\u002Fa>\n    &nbsp; • &nbsp;\n    
\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11077\">Paper\u003C\u002Fa>\n\u003C\u002Fh3>\n\n![Tests](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fworkflows\u002FTests\u002Fbadge.svg)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fadapter-hub\u002Fadapters.svg?color=blue)](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fblob\u002Fmain\u002FLICENSE)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fadapters)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fadapters\u002F)\n\n_Adapters_ is an add-on library to [HuggingFace's Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers), integrating [10+ adapter methods](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html) into [20+ state-of-the-art Transformer models](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmodel_overview.html) with minimal coding overhead for training and inference.\n\n_Adapters_ provides a unified interface for efficient fine-tuning and modular transfer learning, supporting a myriad of features like full-precision or quantized training (e.g. [Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FQLoRA_Llama_Finetuning.ipynb)), [adapter merging via task arithmetics](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html#merging-adapters) or the composition of multiple adapters via [composition blocks](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html), allowing advanced research in parameter-efficient transfer learning for NLP tasks.\n\n> **Note**: The _Adapters_ library has replaced the [`adapter-transformers`](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapter-transformers-legacy) package. All previously trained adapters are compatible with the new library. 
For transitioning, please read: https:\u002F\u002Fdocs.adapterhub.ml\u002Ftransitioning.html.\n\n\n## Installation\n\n`adapters` currently supports **Python 3.9+** and **PyTorch 2.0+**.\nAfter [installing PyTorch](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F), you can install `adapters` from PyPI ...\n\n```\npip install -U adapters\n```\n\n... or from source by cloning the repository:\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters.git\ncd adapters\npip install .\n```\n\n\n## Quick Tour\n\n#### Load pre-trained adapters:\n\n```python\nfrom adapters import AutoAdapterModel\nfrom transformers import AutoTokenizer\n\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\n\nmodel.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\", source=\"hf\", set_active=True)\n\nprint(model(**tokenizer(\"This works great!\", return_tensors=\"pt\")).logits)\n```\n\n**[Learn More](https:\u002F\u002Fdocs.adapterhub.ml\u002Floading.html)**\n\n#### Adapt existing model setups:\n\n```python\nimport adapters\nfrom transformers import AutoModelForSequenceClassification\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\"t5-base\")\n\nadapters.init(model)\n\nmodel.add_adapter(\"my_lora_adapter\", config=\"lora\")\nmodel.train_adapter(\"my_lora_adapter\")\n\n# Your regular training loop...\n```\n\n**[Learn More](https:\u002F\u002Fdocs.adapterhub.ml\u002Fquickstart.html)**\n\n#### Flexibly configure adapters:\n\n```python\nfrom adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel\n\nmodel = AutoAdapterModel.from_pretrained(\"microsoft\u002Fdeberta-v3-base\")\n\nadapter_config = ConfigUnion(\n    PrefixTuningConfig(prefix_length=20),\n    ParBnConfig(reduction_factor=4),\n)\nmodel.add_adapter(\"my_adapter\", config=adapter_config, set_active=True)\n```\n\n**[Learn More](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html)**\n\n#### 
Easily compose adapters in a single model:\n\n```python\nfrom adapters import AdapterSetup, AutoAdapterModel\nimport adapters.composition as ac\nfrom transformers import AutoTokenizer\n\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\n\nqc = model.load_adapter(\"AdapterHub\u002Froberta-base-pf-trec\")\nsent = model.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\")\n\nwith AdapterSetup(ac.Parallel(qc, sent)):\n    print(model(**tokenizer(\"What is AdapterHub?\", return_tensors=\"pt\")))\n```\n\n**[Learn More](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html)**\n\n## Useful Resources\n\nHuggingFace's great documentation on getting started with _Transformers_ can be found [here](https:\u002F\u002Fhuggingface.co\u002Ftransformers\u002Findex.html). `adapters` is fully compatible with _Transformers_.\n\nTo get started with adapters, refer to these locations:\n\n- **[Colab notebook tutorials](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Ftree\u002Fmain\u002Fnotebooks)**, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub\n- **https:\u002F\u002Fdocs.adapterhub.ml**, our documentation on training and using adapters with _adapters_\n- **https:\u002F\u002Fadapterhub.ml** to explore available pre-trained adapter modules and share your own adapters\n- **[Examples folder](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Ftree\u002Fmain\u002Fexamples\u002Fpytorch)** of this repository containing HuggingFace's example training scripts, many adapted for training adapters\n\n## Implemented Methods\n\nCurrently, adapters integrates all architectures and methods listed below:\n\n| Method | Paper(s) | Quick Links |\n| --- | --- | --- |\n| Bottleneck adapters | [Houlsby et al. 
(2019)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.00751.pdf)\u003Cbr> [Bapna and Firat (2019)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.08478.pdf)\u003Cbr> [Steitz and Roth (2024)](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FSteitz_Adapters_Strike_Back_CVPR_2024_paper.pdf) | [Quickstart](https:\u002F\u002Fdocs.adapterhub.ml\u002Fquickstart.html), [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F01_Adapter_Training.ipynb) |\n| AdapterFusion | [Pfeiffer et al. (2021)](https:\u002F\u002Faclanthology.org\u002F2021.eacl-main.39.pdf) | [Docs: Training](https:\u002F\u002Fdocs.adapterhub.ml\u002Ftraining.html#train-adapterfusion), [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F03_Adapter_Fusion.ipynb) |\n| MAD-X,\u003Cbr> Invertible adapters | [Pfeiffer et al. (2020)](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.617\u002F) | [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F04_Cross_Lingual_Transfer.ipynb) |\n| AdapterDrop | [Rücklé et al. (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.11918.pdf) | [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F05_Adapter_Drop_Training.ipynb) |\n| MAD-X 2.0,\u003Cbr> Embedding training | [Pfeiffer et al. 
(2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.15562.pdf) | [Docs: Embeddings](https:\u002F\u002Fdocs.adapterhub.ml\u002Fembeddings.html), [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F08_NER_Wikiann.ipynb) |\n| Prefix Tuning | [Li and Liang (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.00190.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#prefix-tuning) |\n| Parallel adapters,\u003Cbr> Mix-and-Match adapters | [He et al. (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04366.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethod_combinations.html#mix-and-match-adapters) |\n| Compacter | [Mahabadi et al. (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.04647.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#compacter) |\n| LoRA | [Hu et al. (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09685.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#lora) |\n| MTL-LoRA | [Yang et al., 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.09437) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmulti_task_methods.html#mtl-lora) |\n| (IA)^3 | [Liu et al. (2022)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.05638.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#ia-3) |\n| Vera | [Kopiczko et al., 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.11454) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#vera) |\n| DoRA | [Liu et al., 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.09353) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#dora) |\n| UniPELT | [Mao et al. (2022)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07577.pdf) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethod_combinations.html#unipelt) |\n| Prompt Tuning | [Lester et al. 
(2021)](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.243\u002F) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#prompt-tuning) |\n| QLoRA | [Dettmers et al. (2023)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.14314.pdf) | [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FQLoRA_Llama_Finetuning.ipynb) |\n| ReFT | [Wu et al. (2024)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.03592) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#reft) |\n| Adapter Task Arithmetics | [Chronopoulou et al. (2023)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.09344)\u003Cbr> [Zhang et al. (2023)](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F299a08ee712d4752c890938da99a77c6-Abstract-Conference.html) | [Docs](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmerging_adapters.html), [Notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F06_Task_Arithmetics.ipynb)|\n\n\n## Supported Models\n\nWe currently support the PyTorch versions of all models listed on the **[Model Overview](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmodel_overview.html) page** in our documentation.\n\n## Developing & Contributing\n\nTo get started with developing on _Adapters_ yourself and learn more about ways to contribute, please see https:\u002F\u002Fdocs.adapterhub.ml\u002Fcontributing.html.\n\n## Citation\n\nIf you use _Adapters_ in your work, please consider citing our library paper: [Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11077)\n\n```\n@inproceedings{poth-etal-2023-adapters,\n    title = \"Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning\",\n    author = {Poth, Clifton  and\n      Sterz, Hannah  and\n      Paul, Indraneil  and\n   
   Purkayastha, Sukannya  and\n      Engl{\\\"a}nder, Leon  and\n      Imhof, Timo  and\n      Vuli{\\'c}, Ivan  and\n      Ruder, Sebastian  and\n      Gurevych, Iryna  and\n      Pfeiffer, Jonas},\n    booktitle = \"Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations\",\n    month = dec,\n    year = \"2023\",\n    address = \"Singapore\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2023.emnlp-demo.13\",\n    pages = \"149--160\",\n}\n```\n\nAlternatively, for the predecessor `adapter-transformers`, the Hub infrastructure and adapters uploaded by the AdapterHub team, please consider citing our initial paper: [AdapterHub: A Framework for Adapting Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07779)\n\n```\n@inproceedings{pfeiffer2020AdapterHub,\n    title={AdapterHub: A Framework for Adapting Transformers},\n    author={Pfeiffer, Jonas and\n            R{\\\"u}ckl{\\'e}, Andreas and\n            Poth, Clifton and\n            Kamath, Aishwarya and\n            Vuli{\\'c}, Ivan and\n            Ruder, Sebastian and\n            Cho, Kyunghyun and\n            Gurevych, Iryna},\n    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},\n    pages={46--54},\n    year={2020}\n}\n```\n","\u003C!---\n版权所有 © 2020 AdapterHub 团队。保留所有权利。\n\n根据 Apache License, Version 2.0（“许可证”）授权；除非符合许可证的规定，否则不得使用本文件。\n您可以在以下网址获取许可证副本：\n\n    http:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0\n\n除非适用法律要求或书面同意，否则软件按“原样”分发，不提供任何形式的明示或暗示的保证或条件。\n有关权限和限制的具体内容，请参阅许可证。\n-->\n\n\u003Cp align=\"center\">\n\u003Cimg style=\"vertical-align:middle\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fadapter-hub_adapters_readme_6eaedf6ee7fb.png\" width=\"80\" \u002F>\n\u003C\u002Fp>\n\u003Ch1 
align=\"center\">\n\u003Cspan>\u003Ci>Adapters\u003C\u002Fi>\u003C\u002Fspan>\n\u003C\u002Fh1>\n\n\u003Ch3 align=\"center\">\n用于参数高效且模块化迁移学习的统一库\n\u003C\u002Fh3>\n\u003Ch3 align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fadapterhub.ml\">官网\u003C\u002Fa>\n    &nbsp; • &nbsp;\n    \u003Ca href=\"https:\u002F\u002Fdocs.adapterhub.ml\">文档\u003C\u002Fa>\n    &nbsp; • &nbsp;\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11077\">论文\u003C\u002Fa>\n\u003C\u002Fh3>\n\n![测试](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fworkflows\u002FTests\u002Fbadge.svg)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fadapter-hub\u002Fadapters.svg?color=blue)](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fblob\u002Fmain\u002FLICENSE)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fadapters)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fadapters\u002F)\n\n_Adapters_ 是 [HuggingFace's Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) 的一个附加库，它将 [10 多种适配器方法](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html) 集成到 [20 多种最先进的 Transformer 模型](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmodel_overview.html) 中，同时在训练和推理过程中只需极少的代码开销。\n\n_Adapters_ 提供了一个统一的接口，用于高效的微调和模块化的迁移学习，支持多种功能，例如全精度或量化训练（如 [Q-LoRA、Q-Bottleneck Adapters 或 Q-PrefixTuning](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FQLoRA_Llama_Finetuning.ipynb)）、通过任务算术进行 [适配器合并](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html#merging-adapters)，或者通过 [组合块](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html) 组合多个适配器，从而为自然语言处理任务中的参数高效迁移学习研究提供了可能。\n\n> **注意**：_Adapters_ 库已取代了 [`adapter-transformers`](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapter-transformers-legacy) 包。所有先前训练好的适配器都与新库兼容。有关迁移说明，请参阅：https:\u002F\u002Fdocs.adapterhub.ml\u002Ftransitioning.html。\n\n\n## 安装\n\n`adapters` 目前支持 **Python 3.9+** 和 
**PyTorch 2.0+**。\n在 [安装 PyTorch](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) 后，您可以从 PyPI 安装 `adapters` ...\n\n```\npip install -U adapters\n```\n\n... 或者通过克隆仓库从源代码安装：\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters.git\ncd adapters\npip install .\n```\n\n\n## 快速入门\n\n#### 加载预训练适配器：\n\n```python\nfrom adapters import AutoAdapterModel\nfrom transformers import AutoTokenizer\n\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\n\nmodel.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\", source=\"hf\", set_active=True)\n\nprint(model(**tokenizer(\"This works great!\", return_tensors=\"pt\")).logits)\n```\n\n**[了解更多](https:\u002F\u002Fdocs.adapterhub.ml\u002Floading.html)**\n\n#### 适应现有模型设置：\n\n```python\nimport adapters\nfrom transformers import AutoModelForSequenceClassification\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\"t5-base\")\n\nadapters.init(model)\n\nmodel.add_adapter(\"my_lora_adapter\", config=\"lora\")\nmodel.train_adapter(\"my_lora_adapter\")\n\n# 您的常规训练循环...\n```\n\n**[了解更多](https:\u002F\u002Fdocs.adapterhub.ml\u002Fquickstart.html)**\n\n#### 灵活配置适配器：\n\n```python\nfrom adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel\n\nmodel = AutoAdapterModel.from_pretrained(\"microsoft\u002Fdeberta-v3-base\")\n\nadapter_config = ConfigUnion(\n    PrefixTuningConfig(prefix_length=20),\n    ParBnConfig(reduction_factor=4),\n)\nmodel.add_adapter(\"my_adapter\", config=adapter_config, set_active=True)\n```\n\n**[了解更多](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html)**\n\n#### 轻松在一个模型中组合适配器：\n\n```python\nfrom adapters import AdapterSetup, AutoAdapterModel\nimport adapters.composition as ac\n\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\n\nqc = model.load_adapter(\"AdapterHub\u002Froberta-base-pf-trec\")\nsent = 
model.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\")\n\nwith AdapterSetup(ac.Parallel(qc, sent)):\n    print(model(**tokenizer(\"What is AdapterHub?\", return_tensors=\"pt\")))\n```\n\n**[了解更多](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html)**\n\n## 有用资源\n\nHuggingFace 关于如何开始使用 _Transformers_ 的优秀文档可以在这里找到：[https:\u002F\u002Fhuggingface.co\u002Ftransformers\u002Findex.html](https:\u002F\u002Fhuggingface.co\u002Ftransformers\u002Findex.html)。`adapters` 与 _Transformers_ 完全兼容。\n\n要开始使用适配器，请参考以下资源：\n\n- **[Colab 笔记本教程](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Ftree\u002Fmain\u002Fnotebooks)**，这是一系列笔记本，介绍了 (adapter-)transformers 和 AdapterHub 的所有主要概念。\n- **https:\u002F\u002Fdocs.adapterhub.ml**，我们关于使用 `adapters` 训练和应用适配器的文档。\n- **https:\u002F\u002Fadapterhub.ml**，用于探索可用的预训练适配器模块并分享您自己的适配器。\n- **此仓库的 [示例文件夹](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Ftree\u002Fmain\u002Fexamples\u002Fpytorch)**，其中包含 HuggingFace 的示例训练脚本，许多已针对适配器训练进行了改编。\n\n## 已实现的方法\n\n目前，Adapters 集成了以下所有架构和方法：\n\n| 方法 | 论文 | 快速链接 |\n| --- | --- | --- |\n| 瓶颈适配器 | [Houlsby 等 (2019)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.00751.pdf)\u003Cbr> [Bapna 和 Firat (2019)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.08478.pdf)\u003Cbr> [Steitz 和 Roth (2024)](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FSteitz_Adapters_Strike_Back_CVPR_2024_paper.pdf) | [快速入门](https:\u002F\u002Fdocs.adapterhub.ml\u002Fquickstart.html), [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F01_Adapter_Training.ipynb) |\n| AdapterFusion | [Pfeiffer 等 (2021)](https:\u002F\u002Faclanthology.org\u002F2021.eacl-main.39.pdf) | [文档：训练](https:\u002F\u002Fdocs.adapterhub.ml\u002Ftraining.html#train-adapterfusion), 
[笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F03_Adapter_Fusion.ipynb) |\n| MAD-X,\u003Cbr> 可逆适配器 | [Pfeiffer 等 (2020)](https:\u002F\u002Faclanthology.org\u002F2020.emnlp-main.617\u002F) | [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F04_Cross_Lingual_Transfer.ipynb) |\n| AdapterDrop | [Rücklé 等 (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.11918.pdf) | [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F05_Adapter_Drop_Training.ipynb) |\n| MAD-X 2.0,\u003Cbr> 嵌入训练 | [Pfeiffer 等 (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.15562.pdf) | [文档：嵌入](https:\u002F\u002Fdocs.adapterhub.ml\u002Fembeddings.html), [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F08_NER_Wikiann.ipynb) |\n| 前缀调优 | [Li 和 Liang (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.00190.pdf) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#prefix-tuning) |\n| 并行适配器,\u003Cbr> 混搭适配器 | [He 等 (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04366.pdf) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethod_combinations.html#mix-and-match-adapters) |\n| Compacter | [Mahabadi 等 (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.04647.pdf) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#compacter) |\n| LoRA | [Hu 等 (2021)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09685.pdf) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#lora) |\n| MTL-LoRA | [Yang 等, 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.09437) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmulti_task_methods.html#mtl-lora) |\n| (IA)^3 | [Liu 等 (2022)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.05638.pdf) | 
[文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#ia-3) |\n| Vera | [Kopiczko 等, 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.11454) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#vera) |\n| DoRA | [Liu 等, 2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.09353) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#dora) |\n| UniPELT | [Mao 等 (2022)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.07577.pdf) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethod_combinations.html#unipelt) |\n| 提示调优 | [Lester 等 (2021)](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.243\u002F) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#prompt-tuning) |\n| QLoRA | [Dettmers 等 (2023)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.14314.pdf) | [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FQLoRA_Llama_Finetuning.ipynb) |\n| ReFT | [Wu 等 (2024)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.03592) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#reft) |\n| 适配器任务算术 | [Chronopoulou 等 (2023)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.09344)\u003Cbr> [Zhang 等 (2023)](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F299a08ee712d4752c890938da99a77c6-Abstract-Conference.html) | [文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmerging_adapters.html), [笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002F06_Task_Arithmetics.ipynb)|\n\n\n## 支持的模型\n\n我们目前支持文档中 **[模型概览](https:\u002F\u002Fdocs.adapterhub.ml\u002Fmodel_overview.html)** 页面上列出的所有模型的 PyTorch 版本。\n\n## 开发与贡献\n\n如需开始自行开发 _Adapters_ 并了解如何参与贡献，请参阅 https:\u002F\u002Fdocs.adapterhub.ml\u002Fcontributing.html。\n\n## 引用\n\n如果您在工作中使用了 _Adapters_，请考虑引用我们的库论文：[Adapters: 
用于参数高效且模块化迁移学习的统一库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11077)\n\n```\n@inproceedings{poth-etal-2023-adapters,\n    title = \"Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning\",\n    author = {Poth, Clifton  and\n      Sterz, Hannah  and\n      Paul, Indraneil  and\n      Purkayastha, Sukannya  and\n      Engl{\\\"a}nder, Leon  and\n      Imhof, Timo  and\n      Vuli{\\'c}, Ivan  and\n      Ruder, Sebastian  and\n      Gurevych, Iryna  and\n      Pfeiffer, Jonas},\n    booktitle = \"Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations\",\n    month = dec,\n    year = \"2023\",\n    address = \"Singapore\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2023.emnlp-demo.13\",\n    pages = \"149--160\",\n}\n```\n\n或者，对于其前身 `adapter-transformers`、Hub 基础设施以及由 AdapterHub 团队上传的适配器，请考虑引用我们的初始论文：[AdapterHub: 用于调整 Transformer 的框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.07779)\n\n```\n@inproceedings{pfeiffer2020AdapterHub,\n    title={AdapterHub: A Framework for Adapting Transformers},\n    author={Pfeiffer, Jonas and\n            R{\\\"u}ckl{\\'e}, Andreas and\n            Poth, Clifton and\n            Kamath, Aishwarya and\n            Vuli{\\'c}, Ivan and\n            Ruder, Sebastian and\n            Cho, Kyunghyun and\n            Gurevych, Iryna},\n    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},\n    pages={46--54},\n    year={2020}\n}\n```","# Adapters 快速上手指南\n\n**Adapters** 是一个基于 Hugging Face Transformers 的扩展库，旨在为自然语言处理任务提供统一、高效且模块化的参数微调（Parameter-Efficient Transfer Learning）解决方案。它支持 LoRA、Prefix Tuning、AdapterFusion 等 10+ 种主流微调方法，并兼容 20+ 种最先进的 Transformer 模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows\n*   **Python 版本**: 3.9 或更高\n*   **PyTorch 版本**: 2.0 或更高\n*   **前置依赖**: 需先安装 
PyTorch。\n\n> **国内开发者提示**：建议配置国内镜像源以加速依赖下载。\n> *   PyTorch 安装参考：[https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) (可选择国内镜像)\n> *   pip 临时使用清华源：`pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple ...`\n\n## 安装步骤\n\n您可以选择通过 PyPI 直接安装，或从源代码安装。\n\n### 方式一：通过 PyPI 安装（推荐）\n\n```bash\npip install -U adapters\n```\n\n*国内加速版：*\n```bash\npip install -U adapters -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：从源代码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters.git\ncd adapters\npip install .\n```\n\n## 基本使用\n\nAdapters 的设计初衷是极简集成。以下是两种最常用的场景：加载预训练 Adapter 和对现有模型添加 Adapter。\n\n### 场景 1：加载并使用预训练 Adapter\n\n直接加载 Hugging Face Hub 上已有的适配器模型进行推理。\n\n```python\nfrom adapters import AutoAdapterModel\nfrom transformers import AutoTokenizer\n\n# 加载基础模型和分词器\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\n\n# 加载预训练适配器 (例如：IMDB 情感分析)\n# source=\"hf\" 表示从 Hugging Face Hub 加载\nmodel.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\", source=\"hf\", set_active=True)\n\n# 执行推理\ninputs = tokenizer(\"This works great!\", return_tensors=\"pt\")\noutputs = model(**inputs)\nprint(outputs.logits)\n```\n\n### 场景 2：为现有模型添加并训练 Adapter\n\n将任意支持的 Transformers 模型转换为可训练 Adapter 的模式，并以 LoRA 为例进行配置。\n\n```python\nimport adapters\nfrom transformers import AutoModelForSequenceClassification\n\n# 加载标准 Transformers 模型\nmodel = AutoModelForSequenceClassification.from_pretrained(\"t5-base\")\n\n# 初始化 adapters 支持\nadapters.init(model)\n\n# 添加一个新的 adapter，指定方法为 \"lora\"\nmodel.add_adapter(\"my_lora_adapter\", config=\"lora\")\n\n# 设置该 adapter 为训练状态（冻结其他参数，仅训练 adapter 部分）\nmodel.train_adapter(\"my_lora_adapter\")\n\n# 接下来即可使用您常规的训练循环进行训练\n# for batch in dataloader:\n#     outputs = model(**batch)\n#     loss = outputs.loss\n#     loss.backward()\n#     ...\n```\n\n### 
进阶：灵活组合多个 Adapter\n\nAdapters 支持通过简单的上下文管理器组合多个适配器（例如并行执行不同任务的适配器）。\n\n```python\nfrom adapters import AdapterSetup, AutoAdapterModel\nimport adapters.composition as ac\nfrom transformers import AutoTokenizer\n\nmodel = AutoAdapterModel.from_pretrained(\"roberta-base\")\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\")\n\n# 加载两个不同的适配器\nqc_adapter = model.load_adapter(\"AdapterHub\u002Froberta-base-pf-trec\")\nsent_adapter = model.load_adapter(\"AdapterHub\u002Froberta-base-pf-imdb\")\n\n# 使用 Parallel 块并行运行这两个适配器\nwith AdapterSetup(ac.Parallel(qc_adapter, sent_adapter)):\n    inputs = tokenizer(\"What is AdapterHub?\", return_tensors=\"pt\")\n    outputs = model(**inputs)\n    print(outputs)\n```\n\n更多详细文档、Colab 教程及支持的模型列表，请访问 [官方文档](https:\u002F\u002Fdocs.adapterhub.ml)。","某电商初创公司的算法团队需要在有限的 GPU 资源下，快速为多语言客服机器人部署针对“退货政策”、“物流查询”和“产品推荐”三个不同意图的专用模型。\n\n### 没有 adapters 时\n- **显存爆炸**：每微调一个新任务需加载并训练完整的百亿参数模型，单卡显存瞬间溢出，被迫购买昂贵的高端显卡集群。\n- **存储冗余**：每个任务独立保存一份全量模型权重，磁盘空间被大量重复的基础参数占用，管理成本极高。\n- **开发繁琐**：切换不同微调方法（如 LoRA 或 Prefix Tuning）需重写底层代码，难以在 T5、RoBERTa 等不同架构间复用逻辑。\n- **部署沉重**：上线时需为每个意图部署独立的大型服务实例，导致推理延迟高且运维复杂。\n\n### 使用 adapters 后\n- **极致省显存**：利用 adapters 集成的 Q-LoRA 等技术，仅训练少量插入式参数，单张消费级显卡即可并行训练多个任务。\n- **模块化存储**：只需保存几 MB 的适配器文件而非数十 GB 的全量模型，轻松在同一基座模型上挂载不同任务模块。\n- **统一接口**：通过一行代码即可在 20+ 种主流模型上灵活切换或组合多种适配策略，无需关心底层架构差异。\n- **动态组合**：支持通过“任务算术”合并多个适配器，实现单个模型实例动态处理复合意图，大幅降低推理成本。\n\nadapters 通过参数高效微调技术，让中小团队也能以极低的算力成本，实现大模型在多任务场景下的敏捷迭代与低成本部署。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fadapter-hub_adapters_19e23d78.png","adapter-hub","AdapterHub","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fadapter-hub_f5c19b9b.png","",null,"https:\u002F\u002FAdapterHub.ml","https:\u002F\u002Fgithub.com\u002Fadapter-hub",[81,85,89,93],{"name":82,"color":83,"percentage":84},"Python","#3572A5",64.4,{"name":86,"color":87,"percentage":88},"Jupyter 
Notebook","#DA5B0B",35.5,{"name":90,"color":91,"percentage":92},"Makefile","#427819",0.1,{"name":94,"color":95,"percentage":92},"Shell","#89e051",2808,373,"2026-04-08T03:03:41","Apache-2.0","未说明（支持量化训练如 Q-LoRA，暗示可选 GPU 加速）","未说明",{"notes":103,"python":104,"dependencies":105},"该工具是 HuggingFace Transformers 的附加库，需先安装 PyTorch 2.0+。支持多种参数高效微调方法（如 LoRA, QLoRA, Prefix Tuning 等）及模型合并功能。兼容之前 adapter-transformers 包训练的适配器。","3.9+",[106,107],"torch>=2.0","transformers",[14,35],[110,111,64,107,112,113,114,115,116],"nlp","natural-language-processing","bert","pytorch","parameter-efficient-learning","parameter-efficient-tuning","lora","2026-03-27T02:49:30.150509","2026-04-09T09:31:17.349723",[120,125,130,135,140,144],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},25850,"T5 和 mT5 模型是否支持 Adapter？","目前 T5 模型的支持正在开发中（如添加分类头等功能可能尚未完善），而 mT5 模型暂时还不支持。建议关注官方更新以获取最新支持状态。","https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fissues\u002F127",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},25851,"如何在训练时并行运行多个 Adapter 且不让它们互相干扰，同时节省内存？","可以使用新增的 `MultiHeadOutput` 类，它允许将多个并行 Adapter 的独立损失相加，从而一起进行反向传播。这样可以在不重复加载原始 BERT 参数的情况下，让多个 Adapter 输出独立的表示。此外，该功能已通过单元测试，支持对多个 Adapter 同时进行反向传播。","https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fissues\u002F223",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},25852,"训练 MAD-X 论文中的语言适配器时遇到内存不足（OOM）错误怎么办？","即使只有单张 GPU（甚至 12GB 显存），也可以通过使用梯度累积（gradient accumulation）来训练大规模数据集（如 Wikipedia）。无需一次性将所有数据载入内存，利用梯度累积可以有效缓解内存压力。","https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fissues\u002F125",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},25853,"加载预训练模型后再次加载 Adapter 时出现“没有激活任何 Adapter”的警告，该如何解决？","该警告通常出现在使用 `from_pretrained()` 加载模型后，又调用 `load_adapter()` 时。虽然会出现警告，但只要确保在 `load_adapter()` 中设置了 `set_active=True`，Adapter 实际上已成功加载并激活。例如：`model.load_adapter(\"adapter_name\", source=\"hf\", set_active=True)`。可以忽略该警告，或通过检查 `model.active_adapters` 
确认激活状态。","https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fissues\u002F342",{"id":141,"question_zh":142,"answer_zh":143,"source_url":139},25854,"如何在使用自定义 PyTorch 训练循环时为 BERT 模型添加并训练 Adapter？","首先使用 `BertAdapterModel.from_pretrained()` 加载模型，然后调用 `add_adapter()` 添加新 Adapter，并使用 `train_adapter()` 指定要训练的 Adapter 名称。注意冻结 BERT 主干参数（设置 `requires_grad=False`），只保留 Adapter 和自定义分类头的参数可训练。前向传播时，确保输入包含 `input_ids` 和 `attention_mask`，并将 BERT 输出传入自定义分类头。",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},25855,"导入 transformers 时出现 'AutoModelWithHeads' 无法导入的错误怎么办？","该错误通常是因为使用了不兼容的 `transformers` 版本。Adapter-Hub 依赖于其 fork 版本 `adapter-transformers`，而非官方的 `transformers`。请确保安装的是 `adapter-transformers` 包（如 `pip install adapter-transformers`），并避免直接从 `transformers` 导入 `AutoModelWithHeads`。相关讨论已合并到其他 Issue 中，建议参考最新文档确认正确导入方式。","https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fissues\u002F91",[150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245],{"id":151,"version":152,"summary_zh":153,"released_at":154},163153,"v1.2.0","**博客文章：https:\u002F\u002Fadapterhub.ml\u002Fblog\u002F2025\u002F05\u002Fadapters-for-any-transformer**\n\n本版本专为 Hugging Face Transformers **v4.51.x** 构建。\n\n## 新特性\n### Adapter 模型插件接口（@calpt 通过 #738；@lenglaender 通过 #797）\n新的 Adapter 模型接口使得大多数 Adapter 功能能够轻松地集成到任何新模型或自定义 Transformer 模型中。详情请参阅[我们的发布博客文章](https:\u002F\u002Fadapterhub.ml\u002Fblog\u002F2025\u002F05\u002Fadapters-for-any-model-huggingface-hub)。更多信息也可访问：https:\u002F\u002Fdocs.adapterhub.ml\u002Fplugin_interface.html。\n\n### 基于 MTL-LoRA 的多任务组合（@FrLdy 通过 #792）\nMTL-LoRA（[Yang 等，2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2410.09437)）是一种利用 LoRA 技术进行多任务学习的新型 Adapter 组合方法。详情请参阅：https:\u002F\u002Fdocs.adapterhub.ml\u002Fmulti_task_methods.html#mtl-lora。\n\n### VeRA——一种参数高效的 LoRA 变体（@julian-fong 通过 #763）\nVeRA（[Kopiczko 等，2024](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.11454)）是 LoRA Adapter 
的一种变体，所需的可训练参数更少。详情请参阅：https:\u002F\u002Fdocs.adapterhub.ml\u002Fmethods.html#vera。\n\n### 新增模型（通过新接口）\n借助新的 Adapter 模型插件接口，以下几款新模型已获得开箱即用的支持：\n- Gemma 2、Gemma 3\n- ModernBERT\n- Phi 1、Phi 2\n- Qwen 2、Qwen 2.5、Qwen 3\n\n### 其他更新\n- 新增 `init_weights_seed` Adapter 配置属性，用于以相同权重初始化 Adapter（@TimoImhof 通过 #786）\n- 支持通过 `ForwardContext` 定义自定义前向传播方法参数（@calpt 通过 #789）\n\n## 变更\n- 升级支持的 Transformers 版本（@calpt 通过 #799；@TimoImhof 通过 #805）","2025-05-20T19:43:51",{"id":156,"version":157,"summary_zh":158,"released_at":159},163154,"v1.1.1","本版本专为 Hugging Face Transformers **v4.48.x** 构建。\n\n## 变更\n- 将支持的 Transformers 版本升级至 4.48（@TimoImhof，通过 #782）\n\n## 修复\n- 修复 PrefixTuning 的 BatchSplit 实现（@FrLdy，通过 #750）","2025-04-12T21:00:06",{"id":161,"version":162,"summary_zh":163,"released_at":164},163155,"v1.1.0","此版本专为 Hugging Face Transformers **v4.47.x** 构建。\n\n## 新增功能\n#### 添加 AdapterPlus 适配器（@julian-fong，通过 #746、#775）：\nAdapterPlus（Steitz & Roth, 2024，[论文链接](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2024\u002Fpapers\u002FSteitz_Adapters_Strike_Back_CVPR_2024_paper.pdf)）是一种针对视觉 Transformer 优化的新型瓶颈适配器变体。请查看我们的 **[笔记本](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FViT_AdapterPlus_FineTuning.ipynb)**，了解如何为 ViT 模型训练 AdapterPlus 适配器。\n#### 轻松保存、加载并将完整的适配器组合推送到 Hub（@calpt，通过 #771）：\n新添加的 `save_adapter_setup()`、`load_adapter_setup()` 和 `push_adapter_setup_to_hub()` 方法，只需一行代码即可保存、加载和上传复杂的适配器组合，包括 AdapterFusion 设置。更多详情请参阅我们的 **[文档](https:\u002F\u002Fdocs.adapterhub.ml\u002Floading.html#saving-and-loading-adapter-compositions)**。\n#### 支持使用适配器实现完整的梯度检查点功能（@lenglaender，通过 #759）：\n梯度检查点是一种在内存非常有限的情况下进行微调的技术，与高效的适配器相辅相成。现在，所有集成的适配器方法都支持该功能。请查看我们的 **[笔记本](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FGradient_Checkpointing_Llama.ipynb)**，了解如何结合梯度检查点和适配器对 Llama 模型进行微调。\n#### 其他改进\n- 为 AdapterFusion 层自定义名称（@calpt，通过 #774）：\n允许通过名称区分同一适配器上的多个融合层。详情请参阅 
**[此处](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F774#issue-2766103040)**。\n- 允许在 AdapterConfig 中指定适配器的数据类型（@killershrimp，通过 #767）。\n\n## 变更\n- **对库测试套件进行重大重构**（@TimoImhof，通过 #740）：\n详细信息请参阅 [测试 README](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fblob\u002Fmain\u002Ftests\u002FREADME.md)。\n- 升级支持的 Transformers 版本（@calpt，通过 #757；@lenglaender，通过 #776）。\n\n## 修复\n- 修复 Mistral 使用 Flash Attention 时与适配器的兼容性问题（@divyanshuaggarwal，通过 #758）。\n- 修复瓶颈配置在 `ln_before = True` 且 `init_weights = \"mam_adapter\"` 时无法正常工作的问题（@julian-fong，通过 #761）。\n- 修复前向传播中 LoRA\u002F(IA)^3 的默认缩放逻辑（@calpt，通过 #770）：**注意**：此举恢复了与 adapter-transformers 的兼容性，但改变了与先前 Adapters 版本相比的逻辑。详情请参阅 #770。\n- 修复 ReFT 在序列生成及正交投影方面的若干问题（@calpt，通过 #778）。\n- 各种细微的兼容性和警告修复（@calpt，通过 #780、#787）。","2025-01-28T20:27:27",{"id":166,"version":167,"summary_zh":168,"released_at":169},163156,"v1.0.1","本版本专为 Hugging Face Transformers **v4.45.x** 构建。\n\n## 新增\n- 添加 ReFT 训练示例笔记本 [notebook](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FReFT_Adapters_Finetuning.ipynb) (@julian-fong 通过 #741)\n\n## 变更\n- 升级支持的 Transformers 版本 (@calpt 通过 #751)\n\n## 修复\n- 修复 Huggingface-hub 版本 ≥0.26.0 的导入错误，并更新笔记本 (@lenglaender 通过 #750)\n- 修复空激活头的 output_embeddings 获取\u002F设置问题 (@calpt 通过 #754)\n- 修复 LoRA 并行组合问题 (@calpt 通过 #752)","2024-11-02T18:47:18",{"id":171,"version":172,"summary_zh":173,"released_at":174},163157,"v1.0.0","**博客文章：https:\u002F\u002Fadapterhub.ml\u002Fblog\u002F2024\u002F08\u002Fadapters-update-reft-qlora-merging-models**\n\n本版本基于 Hugging Face Transformers **v4.43.x** 构建。\n\n## 新增适配器方法与模型支持\n- 增加 **[表示微调 (ReFT)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.03592)** 实现（LoReFT、NoReFT、DiReFT）(@calpt 通过 #705)\n- 增加 LoRA 权重与 **任务算术** 的合并功能 (@lenglaender 通过 #698)\n- 增加 **Whisper** 模型支持及相应笔记本(@TimoImhof 通过 #693；@julian-fong 通过 #717)\n- 增加 **Mistral** 模型支持 (@KorventennFR 通过 #609)\n- 增加 **PLBart** 模型支持 (@FahadEbrahim 通过 #709)\n\n## 
破坏性变更与弃用内容\n- 移除从归档 Hub 仓库加载的支持 (@calpt 通过 #724)\n- 移除已弃用的 add_fusion() 和 train_fusion() 方法 (@calpt 通过 #714)\n- 移除 `push_adapter_to_hub()` 方法中已弃用的参数 (@calpt 通过 #724)\n- 弃用向适配器激活传递 Python 列表的功能 (@calpt 通过 #714)\n\n## 小修复与改动\n- 升级支持的 Transformers 版本 (@calpt 和 @lenglaender 通过 #712、#719、#727)\n- 修复 Llama 模型对 SDPA\u002FFlash Attention 的支持 (@calpt 通过 #722)\n- 修复 Llama 模型以及瓶颈适配器的梯度检查点功能 (@calpt 通过 #730)","2024-08-10T15:47:41",{"id":176,"version":177,"summary_zh":178,"released_at":179},163158,"v0.2.2","本版本专为 Hugging Face Transformers **v4.40.x** 构建。\n\n## 新增\n- 添加嵌入训练示例笔记本，并更新文档（@hSterz，通过 #706）\n\n## 变更\n- 升级支持的 Transformers 版本（@calpt，通过 #697）\n- 为 AH Adapter 添加到 Hugging Face 的下载重定向（@calpt，通过 #704）\n\n## 修复\n- 修复保存带有自定义头部的 Adapter 模型的问题（@hSterz，通过 #700）\n- 修复使用 `adapter_to()` 将 Adapter 头部移动到指定设备的问题（@calpt，通过 #708）\n- 修复导入编码器-解码器 Adapter 类的问题（@calpt，通过 #711）","2024-06-27T20:23:39",{"id":181,"version":182,"summary_zh":183,"released_at":184},163159,"v0.2.1","本版本专为 Hugging Face Transformers **v4.39.x** 构建。\n\n## 新增\n- 支持通过 Safetensors 保存和加载模型，并新增 `use_safetensors` 参数 (@calpt 通过 #692)\n- 添加 `adapter_to()` 方法，用于移动和转换适配器权重 (@calpt 通过 https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F699)\n\n## 修复\n- 修复 HF 中 `get_adapter_info()` 读取模型信息的问题 (@calpt 通过 #695)\n- 修复使用量化训练时 AdapterTrainer 的 `load_best_model_at_end=True` 参数问题 (@calpt 通过 #699)","2024-05-21T19:16:35",{"id":186,"version":187,"summary_zh":188,"released_at":189},163160,"v0.2.0","本版本专为 Hugging Face Transformers **v4.39.x** 构建。\n\n## 新增\n- 通过 bitsandbytes 添加对 QLoRA\u002FQAdapter 训练的支持（@calpt，via #663）：**[Notebook 教程](https:\u002F\u002Fgithub.com\u002FAdapter-Hub\u002Fadapters\u002Fblob\u002Fmain\u002Fnotebooks\u002FQLoRA_Llama_Finetuning.ipynb)**\n- 在瓶颈适配器中添加 Dropout 层（@calpt，via #667）\n\n## 变更\n- 升级支持的 Transformers 版本（@lenglaender，via #654；@calpt，via #686）\n- 在文档中弃用 Hub 仓库相关内容（@calpt，via #668）\n- 如果 load_adapter() 中未指定来源，则更改解析顺序（@calpt，via #681）\n\n## 修复\n- 修复使用适配器时的 DataParallel 训练问题（@calpt，via 
#658）\n- 修复嵌入层训练中的 Bug（@hSterz，via #655）\n- 修复 Prefix Tuning 的 fp16\u002Fbf16 支持问题（@calpt，via #659）\n- 修复 AdapterDrop 与 Prefix Tuning 混合使用时的训练错误（@TimoImhof，via #673）\n- 修复从 AH 仓库加载适配器时的默认缓存路径问题（@calpt，via #676）\n- 修复在不适用层中跳过组合块的问题（@calpt，via #665）\n- 修复 Unipelt Lora 的默认配置（@calpt，via #682）\n- 修复适配器与 HF Accelerate 自动设备映射的兼容性问题（@calpt，via #678）\n- 如果模型未提供头层 Dropout 概率，则使用默认值（@calpt，via #685）","2024-04-25T13:54:25",{"id":191,"version":192,"summary_zh":193,"released_at":194},163161,"v0.1.2","本版本针对 Hugging Face Transformers **v4.36.x** 构建。\n\n## 新增\n- 添加对 MT5 的支持（@sotwi 通过 #629）\n\n## 变更\n- 升级支持的 Transformers 版本（@calpt 通过 #617）\n- 简化 XAdapterModel 的实现（@calpt 通过 #641）\n\n## 修复\n- 修复 T5 的预测头加载问题（@calpt 通过 #640）","2024-02-28T21:47:30",{"id":196,"version":197,"summary_zh":198,"released_at":199},163162,"v0.1.1","本版本基于 Hugging Face Transformers **v4.35.x** 构建。\n\n## 新增\n- 为 LoRA 和 (IA)³ 添加 `leave_out` 参数 (@calpt，通过 #608)\n\n## 修复\n- 修复 `push_adapter_to_hub()` 中因已弃用参数导致的错误 (@calpt，通过 https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F613)\n- 修复 T5 模型中当 `d_kv != d_model \u002F num_heads` 时 Prefix-Tuning 的问题 (@calpt，通过 https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F621)\n- [Bart] 将 CLS 表示的提取从 EOS 标记移至头部类 (@calpt，通过 https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F624)\n- 修复使用 `skip_layers` 或 AdapterDrop 训练时适配器激活的问题 (@calpt，通过 #634)\n\n## 文档与笔记本\n- 更新笔记本，并新增复杂配置演示笔记本 (@hSterz 和 @calpt，通过 https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Fpull\u002F614)","2024-01-09T21:16:07",{"id":201,"version":202,"summary_zh":203,"released_at":204},163163,"v0.1.0","**Blog post: https:\u002F\u002Fadapterhub.ml\u002Fblog\u002F2023\u002F11\u002Fintroducing-adapters\u002F**\r\n\r\nWith the new _Adapters_ library, we fundamentally refactored the adapter-transformers library and added support for new models and adapter methods.\r\n\r\nThis version is compatible with Hugging Face Transformers version 
4.35.2.\r\n\r\nFor a guide on how to migrate from adapter-transformers to _Adapters_, have a look at https:\u002F\u002Fdocs.adapterhub.ml\u002Ftransitioning.md.\r\nChanges are given compared to the latest [adapter-transformers v3.2.1](https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapters\u002Freleases\u002Ftag\u002Fadapters3.2.1).\r\n\r\n# New Models & Adapter Methods\r\n- Add LLaMA model integration (@hSterz)\r\n- Add X-MOD model integration (@calpt via #581)\r\n- Add Electra model integration (@hSterz via #583, based on work of @amitkumarj441 and @pauli31 in #400)\r\n- Add adapter output & parameter averaging (@calpt)\r\n- Add Prompt Tuning (@lenglaender and @calpt via #595)\r\n- Add Composition Support to LoRA and (IA)³ (@calpt via #598)\r\n\r\n# Breaking Changes\r\n- Renamed bottleneck adapter configs and config strings. The new names can be found here: https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html (@calpt)\r\n- Removed the XModelWithHeads classes (@lenglaender) _(XModelWithHeads have been deprecated since adapter-transformers version 3.0.0)_\r\n\r\n# Changes Due to the Refactoring\r\n- Refactored the implementation of all already supported models (@calpt, @lenglaender, @hSterz, @TimoImhof)\r\n- Separated the model config (`PretrainedConfig`) from the adapters config (`ModelAdaptersConfig`) (@calpt)\r\n- Updated the whole documentation, Jupyter Notebooks and example scripts (@hSterz, @lenglaender, @TimoImhof, @calpt)\r\n- Introduced the `load_model` function to load models containing adapters. 
This replaces the Hugging Face `from_pretrained` function used in the `adapter-transformers` library (@lenglaender)\r\n- Shared more logic for adapter composition between different composition blocks (@calpt via #591)\r\n- Added Backwards Compatibility Tests which allow for testing if adaptations of the codebase, such as refactoring, impair the functionality of the library (@TimoImhof via #596)\r\n- Refactored the EncoderDecoderModel by introducing a new mixin (`ModelUsingSubmodelsAdaptersMixin`) for models that contain other models (@lenglaender)\r\n- Renamed the class `AdapterConfigBase` to `AdapterConfig` (@hSterz via #603)\r\n\r\n# Fixes and Minor Improvements\r\n- Fixed EncoderDecoderModel generate function (@lenglaender)\r\n- Fixed deletion of invertible adapters (@TimoImhof)\r\n- Automatically convert heads when loading with XAdapterModel (@calpt via #594)\r\n- Fix training T5 adapter models with Trainer (@calpt via #599)\r\n- Ensure output embeddings are frozen during adapter training (@calpt #537)\r\n","2023-11-24T10:10:50",{"id":206,"version":207,"summary_zh":208,"released_at":209},163164,"adapters3.2.1","This is the last release of `adapter-transformers`. 
See here for the legacy codebase: https:\u002F\u002Fgithub.com\u002Fadapter-hub\u002Fadapter-transformers-legacy.\r\n\r\n**Based on transformers v4.26.1**\r\n\r\n## Fixed\r\n- Fix compacter init weights (@hSterz via #516)\r\n- Restore compatibility of GPT-2 weight initialization with Transformers (@calpt via #525)\r\n- Restore Python 3.7 compatibility (@lenglaender via #510)\r\n- Fix LoRA & (IA)³ implementation for Bart & MBart (@calpt via #518)\r\n- Fix `resume_from_checkpoint` in `AdapterTrainer` class (@hSterz via #514)\r\n","2023-04-06T21:17:41",{"id":211,"version":212,"summary_zh":213,"released_at":214},163165,"adapters3.2.0","**Based on transformers v4.26.1**\r\n\r\n## New\r\n### New model integrations\r\n- Add BEiT integration (@jannik-brinkmann via #428, #439)\r\n- Add GPT-J integration (@ChiragBSavani via #426)\r\n- Add CLIP integration (@calpt via #483)\r\n- Add ALBERT integration (@lenglaender via #488)\r\n- Add BertGeneration (@hSterz via #480)\r\n\r\n### Misc\r\n- Add support for adapter configuration strings (@calpt via #465, #486)\r\n  This enables you to easily configure adapter configs. To create a Pfeiffer adapter with reduction factor 16 you can now use `pfeiffer[reduction_factor=16]`. Especially for experiments using different hyperparameters or the example scripts, this can come in handy. [Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html#configuration-strings)\r\n- Add support for `Stack`, `Parallel` & `BatchSplit` composition to prefix tuning (@calpt via #476)\r\n  In previous `adapter-transformers` versions, you could combine multiple bottleneck adapters. You could use them in parallel or stack them. Now, this is also possible for prefix-tuning adapters. 
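The `Stack`, `Parallel` and `BatchSplit` blocks mentioned here differ only in how a batch is routed through the active adapters. As a library-free illustration of those routing semantics (toy `stack`/`parallel`/`batch_split` helpers, not the real `adapters` composition API):

```python
# Toy sketch of adapter composition semantics (NOT the real `adapters` API):
# Stack      - every input passes through all adapters in sequence
# Parallel   - the batch is replicated, one copy per adapter
# BatchSplit - the batch is partitioned, one slice per adapter

def stack(batch, adapter_fns):
    """Apply each adapter in turn to the whole batch."""
    for fn in adapter_fns:
        batch = [fn(x) for x in batch]
    return batch

def parallel(batch, adapter_fns):
    """Replicate the batch so every adapter sees all inputs."""
    return [[fn(x) for x in batch] for fn in adapter_fns]

def batch_split(batch, adapter_fns, sizes):
    """Give each adapter its own contiguous slice of the batch."""
    out, start = [], 0
    for fn, n in zip(adapter_fns, sizes):
        out.append([fn(x) for x in batch[start:start + n]])
        start += n
    return out

# Stand-ins for two adapters' transformations:
add_one = lambda x: x + 1
double = lambda x: x * 2

print(stack([1, 2], [add_one, double]))                   # [4, 6]
print(parallel([1, 2], [add_one, double]))                # [[2, 3], [2, 4]]
print(batch_split([1, 2, 3], [add_one, double], [1, 2]))  # [[2], [4, 6]]
```

In the real library these modes are expressed with `adapters.composition.Stack(...)`, `Parallel(...)` and `BatchSplit(...)` blocks passed to the model's active-adapter setup; the sketch above only mirrors the batch-routing behavior they document.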
Add multiple prefixes to the same model to combine the functionality of multiple adapters (Stack) or perform several tasks simultaneously (Parallel, BatchSplit) [Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html#stack)\r\n- Enable parallel sequence generation with adapters (@calpt via #436)\r\n\r\n## Changed\r\n- Removal of the `MultiLingAdapterArguments` class. Use the [`AdapterArguments`](https:\u002F\u002Fdocs.adapterhub.ml\u002Fclasses\u002Fadapter_training.html#transformers.adapters.training.setup_adapter_training) class and [`setup_adapter_training`](https:\u002F\u002Fdocs.adapterhub.ml\u002Fclasses\u002Fadapter_training.html#transformers.adapters.training.setup_adapter_training) method instead. [Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Ftraining.html).\r\n- Upgrade of underlying transformers version to 4.26.1 (@calpt via #455, @hSterz via #503)\r\n\r\n## Fixed\r\n- Fixes for GLUE & dependency parsing example script (@calpt via #430, #454)\r\n- Fix access to shared parameters of compacter (e.g. 
during sequence generation) (@calpt via #440)\r\n- Fix reference to adapter configs in `T5EncoderModel` (@calpt via #437)\r\n- Fix DeBERTa prefix tuning with enabled relative attention (@calpt via #451)\r\n- Fix gating for prefix tuning layers (@calpt via #471)\r\n- Fix input to T5 adapter layers (@calpt via #479)\r\n- Fix AdapterTrainer hyperparameter tuning (@dtuit via #482)\r\n- Move loading best adapter to AdapterTrainer class (@MaBeHen via #487)\r\n- Make HuggingFace Hub Mixin work with newer utilities (@Helw150 via #473)\r\n- Only compute fusion reg loss if fusion layer is trained (@calpt via #505)","2023-03-03T14:08:47",{"id":216,"version":217,"summary_zh":218,"released_at":219},163166,"adapters3.1.0","**Based on transformers v4.21.3**\r\n\r\n## New\r\n### New adapter methods\r\n- Add LoRA implementation (@calpt via #334, #399): **[Documentation](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html#lora)**\r\n- Add (IA)^3 implementation (@calpt via #396): **[Documentation](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html#ia-3)**\r\n- Add UniPELT implementation (@calpt via #407): **[Documentation](https:\u002F\u002Fdocs.adapterhub.ml\u002Foverview.html#unipelt)**\r\n\r\n### New model integrations\r\n- Add `Deberta` and `DebertaV2` integration(@hSterz via #340)\r\n- Add Vision Transformer integration (@calpt via #363)\r\n\r\n### Misc\r\n- Add `adapter_summary()` method (@calpt via #371): **[More info](https:\u002F\u002Fadapterhub.ml\u002Fblog\u002F2022\u002F09\u002Fupdates-in-adapter-transformers-v3-1\u002F#adapter_summary-method)**\r\n- Return AdapterFusion attentions using `output_adapter_fusion_attentions` argument (@calpt via #417): **[Documentation](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html#retrieving-adapterfusion-attentions)**\r\n\r\n## Changed\r\n- Upgrade of underlying transformers version (@calpt via #344, #368, #404)\r\n\r\n## Fixed\r\n- Infer label names for training for flex head models (@calpt via #367)\r\n- 
Ensure root dir exists when saving all adapters\u002Fheads\u002Ffusions (@calpt via #375)\r\n- Avoid attempting to set prediction head if non-existent (@calpt via #377)\r\n- Fix T5EncoderModel adapter integration (@calpt via #376)\r\n- Fix loading adapters together with full model (@calpt via #378)\r\n- Multi-gpu support for prefix-tuning (@alexanderhanboli via #359)\r\n- Fix issues with embedding training (@calpt via #386)\r\n- Fix initialization of added embeddings (@calpt via #402)\r\n- Fix model serialization using `torch.save()` & `torch.load()` (@calpt via #406)","2022-09-15T09:39:42",{"id":221,"version":222,"summary_zh":223,"released_at":224},163167,"adapters3.0.1","**Based on transformers v4.17.0**\r\n\r\n## New\r\n- Support float reduction factors in bottleneck adapter configs (@calpt via #339)\r\n\r\n## Fixed\r\n- [AdapterTrainer] add missing `preprocess_logits_for_metrics` argument (@stefan-it via #317)\r\n- Fix save_all_adapters such that with_head is not ignored (@hSterz via #325)\r\n- Fix inferring batch size for prefix tuning (@calpt via #335)\r\n- Fix bug when using compacters with AdapterSetup context (@calpt via #328)\r\n- [Trainer] Fix issue with AdapterFusion and `load_best_model_at_end` (@calpt via #341)\r\n- Fix generation with GPT-2, T5 and Prefix Tuning (@calpt via #343)\r\n\r\n","2022-05-18T08:48:09",{"id":226,"version":227,"summary_zh":228,"released_at":229},163168,"adapters3.0.0","**Based on transformers v4.17.0**\r\n\r\n## New\r\n### Efficient Fine-Tuning Methods\r\n- Add Prefix Tuning (@calpt via  #292)\r\n- Add Parallel adapters & Mix-and-Match adapter (@calpt via #292)\r\n- Add Compacter (@hSterz via #297)\r\n\r\n### Misc\r\n- Introduce `XAdapterModel` classes as central & recommended model classes (@calpt via #289)\r\n- Introduce `ConfigUnion` class for flexible combination of adapter configs (@calpt via #292)\r\n- Add `AdapterSetup` context manager to replace `adapter_names` parameter (@calpt via #257)\r\n- Add `ForwardContext` to 
wrap model forward pass with adapters (@calpt via #267, #295)\r\n- Search all remote sources when passing `source=None` (new default) to `load_adapter()` (@calpt via #309)\r\n\r\n## Changed\r\n- Deprecate `XModelWithHeads` in favor of `XAdapterModel` (@calpt via #289)\r\n- Refactored adapter integration into model classes and model configs (@calpt via #263, #304)\r\n- Rename activation functions to match Transformers' names (@hSterz via #298)\r\n- Upgrade of underlying transformers version (@calpt via #311)\r\n\r\n## Fixed\r\n- Fix seq2seq generation with flexible heads classes (@calpt via #275, @hSterz via #285)\r\n- Fix `Parallel` composition for XLM-Roberta (@calpt via #305)\r\n","2022-03-23T19:47:37",{"id":231,"version":232,"summary_zh":233,"released_at":234},163169,"adapters2.3.0","**Based on transformers v4.12.5**\r\n\r\n## New\r\n- Allow adding, loading & training of model embeddings (@hSterz via #245). See https:\u002F\u002Fdocs.adapterhub.ml\u002Fembeddings.html.\r\n\r\n## Changed\r\n- Unify built-in & custom head implementation (@hSterz via #252)\r\n- Upgrade of underlying transformers version (@calpt via #255)\r\n\r\n## Fixed\r\n- Fix documentation and consistency issues for AdapterFusion methods (@calpt via #259)\r\n- Fix serialization\u002Fdeserialization issues with custom adapter config classes (@calpt via #253)\r\n","2022-02-09T17:49:43",{"id":236,"version":237,"summary_zh":238,"released_at":239},163170,"adapters2.2.0","**Based on transformers v4.11.3**\r\n\r\n## New\r\n### Model support\r\n- `T5` adapter implementation (@AmirAktify & @hSterz via #182)\r\n- `EncoderDecoderModel` adapter implementation (@calpt via #222)\r\n\r\n### Prediction heads\r\n- `AutoModelWithHeads` prediction heads for language modeling (@calpt via #210)\r\n- `AutoModelWithHeads` prediction head & training example for dependency parsing (@calpt via #208)\r\n\r\n### Training\r\n- Add a new `AdapterTrainer` for training adapters (@hSterz via #218, #241)\r\n- Enable training of 
`Parallel` block (@hSterz via #226)\r\n\r\n### Misc\r\n- Add get_adapter_info() method (@calpt via #220)\r\n- Add set_active argument to add & load adapter\u002Ffusion\u002Fhead methods (@calpt via #214)\r\n- Minor improvements for adapter card creation for HF Hub upload (@calpt via #225)\r\n\r\n## Changed\r\n- Upgrade of underlying transformers version (@calpt via #232, #234, #239 )\r\n- Allow multiple AdapterFusion configs per model; remove `set_adapter_fusion_config()` (@calpt via #216)\r\n\r\n## Fixed\r\n- Incorrect referencing between adapter layer and layer norm for `DataParallel` (@calpt via #228)\r\n","2021-10-14T10:11:52",{"id":241,"version":242,"summary_zh":243,"released_at":244},163171,"adapters2.1.0","**Based on transformers v4.8.2**\r\n\r\n## New\r\n### Integration into HuggingFace's Model Hub\r\n- Add support for loading adapters from HuggingFace Model Hub (@calpt via #162) \r\n- Add method to push adapters to HuggingFace Model Hub (@calpt via #197)\r\n- **[Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Fhuggingface_hub.html)**\r\n\r\n### `BatchSplit` adapter composition\r\n- `BatchSplit` composition block for adapters and heads (@hSterz via #177)\r\n- **[Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Fadapter_composition.html#batchsplit)**\r\n\r\n### Various new features\r\n- Add automatic conversion of static heads when loaded via XModelWithHeads (@calpt via #181)\r\n  **[Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Fprediction_heads.html#automatic-conversion)**\r\n- Add `list_adapters()` method to search for adapters (@calpt via #193)\r\n  **[Learn more](https:\u002F\u002Fdocs.adapterhub.ml\u002Floading.html#finding-pre-trained-adapters)**\r\n- Add delete_adapter(), delete_adapter_fusion() and delete_head() methods (@calpt via #189)\r\n- MAD-X 2.0 WikiAnn NER notebook (@hSterz via #187)\r\n- Upgrade of underlying transformers version (@hSterz via #183, @calpt via #194 & #200)\r\n\r\n## Changed\r\n- Deprecate add_fusion() and 
train_fusion() in favor of add_adapter_fusion() and train_adapter_fusion() (@calpt via #190)\r\n\r\n## Fixed\r\n- Suppress no-adapter warning when adapter_names is given (@calpt via #186)\r\n- Fix `leave_out` in `load_adapter()` when loading language adapters from Hub (@hSterz via #177)","2021-07-08T14:20:36",{"id":246,"version":247,"summary_zh":248,"released_at":249},163172,"adapters2.0.1","**Based on transformers v4.5.1**\r\n\r\n## New\r\n- Allow different reduction factors for different adapter layers (@hSterz via #161)\r\n- Allow dynamic dropping of adapter layers in `load_adapter()` (@calpt via #172)\r\n- Add method `get_adapter()` to retrieve weights of an adapter (@hSterz via #169)\r\n\r\n## Changed\r\n- Re-add adapter_names argument to model `forward()` methods (@calpt via #176)\r\n\r\n## Fixed\r\n- Fix resolving of adapter from Hub when multiple options available (@Aaronsom via #164)\r\n- Fix & improve adapter saving\u002Floading using Trainer class (@calpt via #178)\r\n","2021-05-28T18:53:48"]