[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-synbol--Awesome-Parameter-Efficient-Transfer-Learning":3,"tool-synbol--Awesome-Parameter-Efficient-Transfer-Learning":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 
链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":78,"stars":81,"forks":82,"last_commit_at":83,"license":78,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":91,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":100,"updated_at":101,"faqs":102,"releases":103},5492,"synbol\u002FAwesome-Parameter-Efficient-Transfer-Learning","Awesome-Parameter-Efficient-Transfer-Learning","Collection of awesome parameter-efficient fine-tuning resources. 
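For the MarkItDown entry above, a minimal usage sketch (assuming the `markitdown` package is installed; the file name is a placeholder):

```python
# Minimal MarkItDown sketch: convert an office/PDF file into
# LLM-friendly Markdown text. "report.pdf" is a placeholder path.
from markitdown import MarkItDown

md = MarkItDown()                   # optionally MarkItDown(enable_plugins=True)
result = md.convert("report.pdf")   # also handles .docx, .xlsx, .pptx, images, ...
print(result.text_content)          # headings, lists, tables, links as Markdown
```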
","Awesome-Parameter-Efficient-Transfer-Learning 是一个专注于“参数高效迁移学习”的优质资源合集。在大型人工智能模型日益庞大的今天，传统的全量微调方法往往需要巨大的计算资源和显存，这让许多尝试者望而却步。该项目正是为了解决这一痛点而生，它汇集了前沿的论文、代码实现和技术教程，旨在帮助用户仅通过更新极少量的模型参数，就能让强大的预训练模型适应特定的下游任务。\n\n这里不仅系统梳理了以 Adapter Tuning（适配器微调）为代表的各类主流技术路线，还持续追踪该领域的最新研究进展。无论是希望降低实验成本、快速验证想法的 AI 研究人员，还是受限于硬件资源但渴望应用大模型的开发者，都能从中找到极具价值的参考指引。通过借鉴这些高效方案，用户可以显著减少训练时间和算力消耗，同时保持优异的性能表现。如果你正致力于探索如何让大模型更轻量、更经济地落地应用，Awesome-Parameter-Efficient-Transfer-Learning 将是你不可或缺的知识宝库。","## \u003Cp align=center>𝓐𝔀𝓮𝓼𝓸𝓶𝓮 𝓟𝓪𝓻𝓪𝓶𝓮𝓽𝓮𝓻-𝓔𝓯𝓯𝓲𝓬𝓲𝓮𝓷𝓽 𝓣𝓻𝓪𝓷𝓼𝓯𝓮𝓻 𝓛𝓮𝓪𝓻𝓷𝓲𝓷𝓰\u003C\u002Fp>\n\u003Cdiv align=center>\n\n\u003Cp>\n\n ![GitHub stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning.svg?color=red&style=for-the-badge) \n ![GitHub forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning.svg?style=for-the-badge) \n ![GitHub activity](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning?color=yellow&style=for-the-badge) \n ![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning?style=for-the-badge)\n\n [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n [![Maintenance](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg)](https:\u002F\u002FGitHub.com\u002FNaereen\u002FStrapDown.js\u002Fgraphs\u002Fcommit-activity)\n\u003C\u002Fp>\n\n𝓐 𝓬𝓸𝓵𝓵𝓮𝓬𝓽𝓲𝓸𝓷 𝓸𝓯 𝓻𝓮𝓼𝓸𝓾𝓻𝓬𝓮𝓼 𝓸𝓷 𝓹𝓪𝓻𝓪𝓶𝓮𝓽𝓮𝓻-𝓮𝓯𝓯𝓲𝓬𝓲𝓮𝓷𝓽 𝓽𝓻𝓪𝓷𝓼𝓯𝓮𝓻 𝓵𝓮𝓪𝓻𝓷𝓲𝓷𝓰.\n\n\u003C\u002Fdiv>\n\n## 📚 \u003Cspan id=\"head1\"> *Table of Contents* \u003C\u002Fspan>\n- [Introduction](#introduction)\n\n- [Keywords](#keywords)\n\n- [Papers](#papers)\n\n  - [Addition-based Tuning](#addition-based-tuning)\n    - [Adapter Tuning](#adapter-tuning)&emsp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-30-2EA9DF?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Prompt Tuning](#prompt-tuning)&emsp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-27-90B44B?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Prefix Tuning](#prefix-tuning)&emsp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-5-B481BB?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Side Tuning](#side-tuning)&emsp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-15-F9BF45?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n  - [Partial-based Tuning](#partial-based-tuning)\n    - [Specification Tuning](#specification-tuning)&emsp;&emsp;&emsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-8-E83015?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Reparameter Tuning](#reparameter-tuning)&ensp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-10-2EA9DF?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n  - [Unified Tuning](#unified-tuning)&ensp;&emsp;&nbsp;\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-6-B481BB?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n- [Datasets](#datasets-of-visual-peft)\n\n- [Contribution](#contribution)\n\n## 📝 \u003Cspan id=\"head1\"> *Introduction* \u003C\u002Fspan>\n* **Parameter-Efficient Fine-Tuning (PEFT)** seeks to exceed the 
## 💬 <span id="head1"> *Keywords* </span>
![](https://img.shields.io/badge/Abbreviation-blue) The abbreviation of the work.

![](https://img.shields.io/badge/Application-green) The main explored task/application of the work.

![](https://img.shields.io/badge/Other-orange) Other important information about the work.

## 🐌 <span id="head1"> *Papers* </span>
### Addition-based Tuning
### Adapter Tuning
- **[1] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition,** NeurIPS 2022.

  *Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo.*

  [[Paper](https://arxiv.org/abs/2205.13535)][[Code](https://github.com/ShoufaChen/AdaptFormer)] ![](https://img.shields.io/badge/AdaptFormer-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Video_Recognition-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[2] Convolutional Bypasses are Better Vision Transformer Adapters,** arXiv 2022.

  *Shibo Jie, Zhi-Hong Deng.*

  [[Paper](https://arxiv.org/abs/2207.07039)][[Code](https://github.com/JieShibo/PETL-ViT)] ![](https://img.shields.io/badge/Convpass-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Domain_Generalization-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[3] ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning,** NeurIPS 2022.

  *Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li.*

  [[Paper](https://proceedings.neurips.cc/paper_files/paper/2022/hash/a92e9165b22d4456fc6d87236e04c266-Abstract-Conference.html)][[Code](https://github.com/linziyi96/ST-Adapter)] ![](https://img.shields.io/badge/ST_Adapter-blue) ![](https://img.shields.io/badge/Video_Recognition-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[4] AIM: Adapting Image Models for Efficient Video Action Recognition,** ICLR 2023.

  *Taojiannan Yang, Yi Zhu, Yusheng Xie, Aston Zhang, Chen Chen, Mu Li.*

  [[Paper](https://arxiv.org/abs/2302.03024)][[Code](https://adapt-image-models.github.io/)] ![](https://img.shields.io/badge/AIM-blue) ![](https://img.shields.io/badge/Video_Recognition-green) ![](https://img.shields.io/badge/Adapter_Design-orange)
- **[5] Lossless Adaptation of Pretrained Vision Models For Robotic Manipulation,** ICLR 2023.

  *Mohit Sharma, Claudio Fantacci, Yuxiang Zhou, Skanda Koppula, et al.*

  [[Paper](https://arxiv.org/abs/2304.06600)][[Code](https://sites.google.com/view/robo-adapters/)] ![](https://img.shields.io/badge/Robo_Adapter-blue) ![](https://img.shields.io/badge/Robotic_Manipulation-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[6] 1% VS 100%: Parameter-Efficient Low Rank Adapter for Dense Predictions,** CVPR 2023.

  *Dongshuo Yin, Yiran Yang, Zhechao Wang, Hongfeng Yu, Kaiwen Wei, Xian Sun.*

  [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Yin_1_VS_100_Parameter-Efficient_Low_Rank_Adapter_for_Dense_Predictions_CVPR_2023_paper.html)][Code] ![](https://img.shields.io/badge/LoRand-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[7] Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks,** NeurIPS 2022.

  *Yen-Cheng Liu, Chih-Yao Ma, Junjiao Tian, Zijian He, Zsolt Kira.*

  [[Paper](https://arxiv.org/abs/2210.03265)][Code] ![](https://img.shields.io/badge/Polyhistor-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[8] VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding,** AAAI 2024.

  *Yi Xin, Junlong Du, Qiang Wang, Zhiwen Lin, Ke Yan.*

  [[Paper](https://arxiv.org/abs/2312.08733)][Code] ![](https://img.shields.io/badge/VMT_Adapter-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[9] SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels,** IJCV 2023.

  *Henry Hengyuan Zhao, Pichao Wang, Yuyang Zhao, Hao Luo, Fan Wang, Mike Zheng Shou.*

  [[Paper](https://arxiv.org/abs/2303.07910)][[Code](https://github.com/showlab/SCT)] ![](https://img.shields.io/badge/SCT-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[10] Important Channel Tuning,** OpenReview 2023.

  *Hengyuan Zhao, Pichao Wang, Yuyang Zhao, Fan Wang, Mike Zheng Shou.*

  [[Paper](https://openreview.net/forum?id=TTMyoOdB9hZ)][Code] ![](https://img.shields.io/badge/ICT-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[11] Revisit Parameter-Efficient Transfer Learning: A Two-Stage Paradigm,** arXiv 2023.

  *Hengyuan Zhao, Hao Luo, Yuyang Zhao, Pichao Wang, Fan Wang, Mike Zheng Shou.*

  [[Paper](https://arxiv.org/abs/2303.07910)][Code] ![](https://img.shields.io/badge/TTC_Tuning-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)
- **[12] Compacter: Efficient Low-Rank Hypercomplex Adapter Layers,** NeurIPS 2021.

  *Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder.*

  [[Paper](https://arxiv.org/abs/2106.04647)][[Code](https://github.com/rabeehk/compacter)] ![](https://img.shields.io/badge/COMPACTER-blue) ![](https://img.shields.io/badge/Adapter_Optimization-orange)

- **[13] Parameter-Efficient and Student-Friendly Knowledge Distillation,** NeurIPS 2022.

  *Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, Dacheng Tao.*

  [[Paper](https://arxiv.org/abs/2205.15308)][Code] ![](https://img.shields.io/badge/PESF_KD-blue)

- **[14] VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks,** CVPR 2022.

  *Yi-Lin Sung, Jaemin Cho, Mohit Bansal.*

  [[Paper](https://arxiv.org/abs/2112.06825)][[Code](https://github.com/ylsung/VL_adapter)] ![](https://img.shields.io/badge/VL_adapter-blue) ![](https://img.shields.io/badge/Cross_Modal-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[15] UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling,** ICLR 2024.

  *Haoyu Lu, Mingyu Ding, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Masayoshi Tomizuka, Wei Zhan.*

  [[Paper](https://arxiv.org/abs/2302.06605)][[Code](https://github.com/RERV/UniAdapter)] ![](https://img.shields.io/badge/UniAdapter-blue) ![](https://img.shields.io/badge/Cross_Modal-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[16] Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model,** arXiv 2023.

  *Zelin Peng, Zhengqin Xu, Zhilin Zeng, Lingxi Xie, Qi Tian, Wei Shen.*

  [[Paper](https://arxiv.org/pdf/2311.17112.pdf)][Code] ![](https://img.shields.io/badge/SAM_COBOT-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[17] Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning,** arXiv 2023.

  *Sanghyeon Kim, Hyunmo Yang, Younghyun Kim, Youngjoon Hong, Eunbyung Park.*

  [[Paper](https://arxiv.org/abs/2309.06922)][[Code](https://github.com/extremebird/Hydra/tree/main)] ![](https://img.shields.io/badge/Hydra-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[18] MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering,** CVPR 2023.

  *Jingjing Jiang, Nanning Zheng.*

  [[Paper](https://arxiv.org/abs/2303.01239)][[Code](https://github.com/jingjing12110/MixPHM)] ![](https://img.shields.io/badge/MixPHM-blue) ![](https://img.shields.io/badge/Cross_Modal-green) ![](https://img.shields.io/badge/Adapter_Optimization-orange)
- **[19] Vision Transformers are Parameter-Efficient Audio-Visual Learners,** CVPR 2023.

  *Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius.*

  [[Paper](https://arxiv.org/abs/2212.07983)][[Code](https://genjib.github.io/project_page/LAVISH/)] ![](https://img.shields.io/badge/LAVISH-blue) ![](https://img.shields.io/badge/Cross_Modal-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[20] SAM-Adapter: Adapting Segment Anything in Underperformed Scenes,** ICCVW 2023.

  *Tianrun Chen, Lanyun Zhu, Chaotao Deng, Runlong Cao, Yan Wang, Shangzhan Zhang, Zejian Li, Lingyun Sun, Ying Zang, Papa Mao.*

  [[Paper](https://openaccess.thecvf.com/content/ICCV2023W/VCL/papers/Chen_SAM-Adapter_Adapting_Segment_Anything_in_Underperformed_Scenes_ICCVW_2023_paper.pdf)][[Code](http://research.kokoni3d.com/sam-adapter)] ![](https://img.shields.io/badge/SAM_Adapter-blue) ![](https://img.shields.io/badge/Image_Segmentation-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[21] T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models,** AAAI 2024.

  *Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, et al.*

  [[Paper](https://arxiv.org/abs/2302.08453)][[Code](https://github.com/TencentARC/T2I-Adapter)] ![](https://img.shields.io/badge/T2I_Adapter-blue) ![](https://img.shields.io/badge/Text2Image-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[22] I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models,** arXiv 2023.

  *Xun Guo, Mingwu Zheng, Liang Hou, Yuan Gao, Yufan Deng, et al.*

  [[Paper](https://arxiv.org/abs/2312.16693)][[Code](https://github.com/I2V-Adapter/I2V-Adapter-repo)] ![](https://img.shields.io/badge/I2V_Adapter-blue) ![](https://img.shields.io/badge/Image2Video-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[23] AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models,** arXiv 2023.

  *Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu.*

  [[Paper](https://arxiv.org/pdf/2312.08881.pdf)][[Code](https://github.com/csguoh/AdaptIR)] ![](https://img.shields.io/badge/AdaptIR-blue) ![](https://img.shields.io/badge/Super_Resolution-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[24] A Closer Look at Parameter-Efficient Tuning in Diffusion Models,** arXiv 2023.

  *Chendong Xiang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu.*

  [[Paper](https://arxiv.org/abs/2303.18181)][[Code](https://github.com/Xiang-cd/unet-finetune)] ![](https://img.shields.io/badge/Unet_Finetune-blue) ![](https://img.shields.io/badge/Generate_Task-green) ![](https://img.shields.io/badge/Adapter_Design-orange)
- **[25] CAST: Cross-Attention in Space and Time for Video Action Recognition,** NeurIPS 2023.

  *Dongho Lee, Jongseo Lee, Jinwoo Choi.*

  [[Paper](https://proceedings.neurips.cc/paper_files/paper/2023/file/fb1b83b35e96998ddfc0ce1dab635445-Paper-Conference.pdf)][[Code](https://github.com/KHU-VLL/CAST)] ![](https://img.shields.io/badge/CAST_Finetune-blue) ![](https://img.shields.io/badge/Video_Action_Recognition-green) ![](https://img.shields.io/badge/Adapter_Design-orange)

- **[26] Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis,** CVPR 2024.

  *Xin Zhou, Dingkang Liang, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, Xiang Bai.*

  [[Paper](https://arxiv.org/abs/2403.01439)][[Code](https://github.com/LMD0311/DAPT)] ![](https://img.shields.io/badge/DAPT-blue) ![](https://img.shields.io/badge/Point_Cloud-green) ![](https://img.shields.io/badge/Adapter_with_Prompt-orange)

- **[27] MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation,** arXiv 2024.

  *Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.*

  [[Paper](https://arxiv.org/pdf/2404.05674.pdf)][[Code](https://moma-adapter.github.io/)] ![](https://img.shields.io/badge/MoMA-blue) ![](https://img.shields.io/badge/Personalized_Image_Generation-green) ![](https://img.shields.io/badge/Adapter_with_Prompt-orange)

- **[28] Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation,** ICCV 2023.

  *Zunnan Xu, Zhihong Chen, Yong Zhang, Yibing Song, Xiang Wan, Guanbin Li.*

  [[Paper](https://openaccess.thecvf.com/content/ICCV2023/papers/Xu_Bridging_Vision_and_Language_Encoders_Parameter-Efficient_Tuning_for_Referring_Image_ICCV_2023_paper.pdf)][[Code](https://github.com/kkakkkka/ETRIS)]

- **[29] Enhancing Fine-grained Multi-modal Alignment via Adapters: A Parameter-Efficient Training Framework for Referring Image Segmentation,** WANT @ ICML 2024.

  *Zunnan Xu, Jiaqi Huang, Ting Liu, Yong Liu, Haonan Han, Kehong Yuan, Xiu Li.*

  [[Paper](https://openreview.net/forum?id=bp8xXLi2Mp)][[Code](https://kkakkkka.github.io/dcris)]

- **[30] Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference,** arXiv 2024.

  *Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin.*

  [[Paper](https://arxiv.org/pdf/2405.14700)][[Code](https://github.com/liuting20/Sparse-Tuning)]

- **[31] PAVE: Patching and Adapting Video Large Language Models,** CVPR 2025.

  *Zhuoming Liu, Yiquan Li, Khoi Duc Nguyen, Yiwu Zhong, Yin Li.*

  [[Paper](https://arxiv.org/abs/2503.19794)][[Code](https://github.com/dragonlzm/PAVE)]
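Most of the adapter papers above share one building block: a low-dimensional bottleneck with a residual connection, inserted into a frozen transformer block. A minimal PyTorch sketch (dimensions, scaling, and placement are illustrative, not taken from any single paper):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int = 768, bottleneck: int = 64, scale: float = 0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        self.scale = scale
        nn.init.zeros_(self.up.weight)  # start as an identity map for stability
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.up(self.act(self.down(x)))

# Roughly 2 * dim * bottleneck trainable parameters per block, versus
# ~12 * dim^2 for a full transformer block: on the order of 1% of its weights.
x = torch.randn(2, 197, 768)   # (batch, tokens, dim), ViT-B sized
print(Adapter()(x).shape)      # torch.Size([2, 197, 768])
```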
### Prompt Tuning
- **[1] Visual Prompt Tuning,** ECCV 2022.

  *Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim.*

  [[Paper](https://arxiv.org/abs/2203.12119)][[Code](https://github.com/kmnp/vpt)] ![](https://img.shields.io/badge/VPT-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[2] Visual Prompt Tuning for Test-time Domain Adaptation,** arXiv 2022.

  *Yunhe Gao, Xingjian Shi, Yi Zhu, Hao Wang, Zhiqiang Tang, Xiong Zhou, et al.*

  [[Paper](https://arxiv.org/pdf/2210.04831.pdf)][Code] ![](https://img.shields.io/badge/DePT-blue) ![](https://img.shields.io/badge/Test_Time_Adaptation-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[3] LPT: Long-tailed Prompt Tuning for Image Classification,** ICLR 2023.

  *Bowen Dong, Pan Zhou, Shuicheng Yan, Wangmeng Zuo.*

  [[Paper](https://arxiv.org/abs/2210.01033)][[Code](https://github.com/DongSky/LPT)] ![](https://img.shields.io/badge/LPT-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[4] Pro-tuning: Unified Prompt Tuning for Vision Tasks,** TCSVT 2023.

  *Xing Nie, Bolin Ni, Jianlong Chang, Gaofeng Meng, Chunlei Huo, et al.*

  [[Paper](https://arxiv.org/abs/2207.14381)][Code] ![](https://img.shields.io/badge/Pro_tuning-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[5] Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models,** ICCV 2023.

  *Yaohua Zha, Jinpeng Wang, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia.*

  [[Paper](https://arxiv.org/pdf/2304.07221.pdf)][[Code](https://github.com/zyh16143998882/ICCV23-IDPT)] ![](https://img.shields.io/badge/IDPT-blue) ![](https://img.shields.io/badge/Point_Cloud-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[6] Visual Prompt Multi-Modal Tracking,** CVPR 2023.

  *Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, Huchuan Lu.*

  [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Zhu_Visual_Prompt_Multi-Modal_Tracking_CVPR_2023_paper.html)][[Code](https://github.com/jiawen-zhu/ViPT)] ![](https://img.shields.io/badge/ViPT-blue) ![](https://img.shields.io/badge/MultiModal_Tracking-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[7] LION: Implicit Vision Prompt Tuning,** AAAI 2024.

  *Haixin Wang, Jianlong Chang, Xiao Luo, Jinan Sun, Zhouchen Lin, Qi Tian.*

  [[Paper](https://arxiv.org/abs/2303.09992)][Code] ![](https://img.shields.io/badge/LION-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Embedding_Level-orange)
- **[8] Convolutional Visual Prompt for Robust Visual Perception,** NeurIPS 2023.

  *Yun-Yun Tsai, Chengzhi Mao, Junfeng Yang.*

  [[Paper](https://openreview.net/forum?id=qgmrC8jhCo)][Code] ![](https://img.shields.io/badge/CVP-blue) ![](https://img.shields.io/badge/Test_Time_Adaptation-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[9] ProSFDA: Prompt Learning based Source-free Domain Adaptation for Medical Image Segmentation,** arXiv 2023.

  *Shishuai Hu, Zehui Liao, Yong Xia.*

  [[Paper](https://arxiv.org/abs/2211.11514)][[Code](https://github.com/ShishuaiHu/ProSFDA)] ![](https://img.shields.io/badge/ProSFDA-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Embedding_Level-orange)

- **[10] Explicit Visual Prompting for Low-Level Structure Segmentations,** CVPR 2023.

  *Weihuang Liu, Xi Shen, Chi-Man Pun, Xiaodong Cun.*

  [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Liu_Explicit_Visual_Prompting_for_Low-Level_Structure_Segmentations_CVPR_2023_paper.html)][[Code](https://github.com/NiFangBaAGe/Explicit-Visual-Prompt)] ![](https://img.shields.io/badge/EVP-blue) ![](https://img.shields.io/badge/Dense_Prediction-green) ![](https://img.shields.io/badge/Pixel_Level-orange)

- **[11] P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting,** NeurIPS 2022.

  *Ziyi Wang, Xumin Yu, Yongming Rao, Jie Zhou, Jiwen Lu.*

  [[Paper](https://proceedings.neurips.cc/paper_files/paper/2022/hash/5cd6dc946ccc37ae6c9f4fc6b6181e1d-Abstract-Conference.html)][[Code](https://github.com/wangzy22/P2P)] ![](https://img.shields.io/badge/P2P-blue) ![](https://img.shields.io/badge/Point_Cloud-green) ![](https://img.shields.io/badge/Pixel_Level-orange)

- **[12] Exploring Visual Prompts for Adapting Large-Scale Models,** arXiv 2022.

  *Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, Phillip Isola.*

  [[Paper](https://arxiv.org/abs/2203.17274)][[Code](https://hjbahng.github.io/visual_prompting/)] ![](https://img.shields.io/badge/VP-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Pixel_Level-orange)

- **[13] Unleashing the Power of Visual Prompting At the Pixel Level,** arXiv 2023.

  *Junyang Wu, Xianhang Li, Chen Wei, Huiyu Wang, Alan Yuille, Yuyin Zhou, Cihang Xie.*

  [[Paper](https://arxiv.org/abs/2212.10556)][[Code](https://github.com/UCSC-VLAA/EVP)] ![](https://img.shields.io/badge/EVP-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Pixel_Level-orange)

- **[14] Understanding and Improving Visual Prompting: A Label-Mapping Perspective,** CVPR 2023.

  *Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu.*

  [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Chen_Understanding_and_Improving_Visual_Prompting_A_Label-Mapping_Perspective_CVPR_2023_paper.html)][[Code](https://github.com/OPTML-Group/ILM-VP)] ![](https://img.shields.io/badge/ILM_VP-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Pixel_Level-orange)
- **[15] Learning to Prompt for Vision-Language Models,** IJCV 2022.

  *Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu.*

  [[Paper](https://arxiv.org/abs/2109.01134)][[Code](https://github.com/KaiyangZhou/CoOp)] ![](https://img.shields.io/badge/CoOp-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Text_Prompt-orange)

- **[16] HyperPrompt: Prompt-based Task-Conditioning of Transformers,** ICML 2022.

  *Yun He, Steven Zheng, Yi Tay, Jai Gupta, Yu Du, Vamsi Aribandi, et al.*

  [[Paper](https://proceedings.mlr.press/v162/he22f.html)][Code] ![](https://img.shields.io/badge/HyperPrompt-blue) ![](https://img.shields.io/badge/Multi_Task-green)

- **[17] MaPLe: Multi-modal Prompt Learning,** CVPR 2023.

  *Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, et al.*

  [[Paper](https://arxiv.org/abs/2210.03117)][[Code](https://github.com/muzairkhattak/multimodal-prompt-learning)] ![](https://img.shields.io/badge/MaPLe-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/MultiModal_Prompt-orange)

- **[18] Hierarchical Prompt Learning for Multi-Task Learning,** CVPR 2023.

  *Yajing Liu, Yuning Lu, Hao Liu, Yaozu An, Zhuoran Xu, Zhuokun Yao, et al.*

  [[Paper](https://openaccess.thecvf.com/content/CVPR2023/html/Liu_Hierarchical_Prompt_Learning_for_Multi-Task_Learning_CVPR_2023_paper.html)][Code] ![](https://img.shields.io/badge/HiPro-blue) ![](https://img.shields.io/badge/Image_Recognition-green) ![](https://img.shields.io/badge/Text_Prompt-orange)

- **[19] Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving,** CVPR 2023.

  *Xiwen Liang, Minzhe Niu, Jianhua Han, Hang Xu, Chunjing Xu, Xiaodan Liang.*

  [[Paper](https://arxiv.org/abs/2303.01788)][Code] ![](https://img.shields.io/badge/VE_Prompt-blue) ![](https://img.shields.io/badge/Multi_Task-green) ![](https://img.shields.io/badge/Autonomous_Driving-green)

- **[20] Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model,** TMM 2023.

  *Yinghui Xing, Qirui Wu, De Cheng, Shizhou Zhang, Guoqiang Liang, et al.*

  [[Paper](https://ieeexplore.ieee.org/abstract/document/10171397/)][[Code](https://github.com/fanrena/DPT)] ![](https://img.shields.io/badge/DPT-blue) ![](https://img.shields.io/badge/Image_Recognition-green)
2023.\n  \n  *Pan, Ting and Tang, Lulu and Wang, Xinlong and Shan, Shiguang.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.09128.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002Ftokenize-anything)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTAP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) \n\n- **[22] MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning,** AAAI 2024.\n  \n  *Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.08636)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMmAP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Prompt-orange)\n\n- **[23] Diversity-Aware Meta Visual Prompting,** CVPR 2023.\n  \n  *Qidong Huang, Xiaoyi Dong, Dongdong Chen, Weiming Zhang, Feifei Wang, Gang Hua, Nenghai Yu.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08138)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDAM_VP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[24] Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks,** NeurIPS 2023.\n  \n  *Duan, Haoyi and Xia, Yan and Zhou, Mingze and Tang, Li and Zhu, Jieming and Zhao, Zhou.*\n\n  [[Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Faf01716e08073368a7c8a62be46dba17-Paper-Conference.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fhaoyi-duan\u002FDG-SCT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDG-SCT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAudio-visual_Understanding-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Prompt-orange)\n  \n- **[25] Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models,** AAAI 2024.\n\n  *Yiwen Tang, Ray Zhang, Zoey Guo, Xianzheng Ma, Dong Wang, Zhigang Wang, Bin Zhao, Xuelong Li.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03059)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_PEFT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_Cloud-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[26] E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning,** ICCV 2023.\n  \n  *Han, Cheng and Wang, Qifan and Cui, Yiming and Cao, Zhiwen and Wang, Wenguan and Qi, Siyuan and Liu, Dongfang.*\n  \n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13770)][[Code](https:\u002F\u002Fgithub.com\u002FChengHan111\u002FE2VPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FE2VPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[27] DGL: Dynamic Global-Local Prompt Tuning for Text-Video Retrieval,** AAAI 2024.\n  \n  *Xiangpeng Yang and Linchao Zhu and Xiaohan Wang and Yi Yang.*\n  \n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10588)][[Code](https:\u002F\u002Fgithub.com\u002Fknightyxp\u002FDGL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDGL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FText_Video_Retrieval-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGlobal_Local_Prompt-orange)
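\n\nMost of the prompt-tuning entries above share one mechanism: a few learnable tokens are prepended to the frozen backbone's token sequence, and only those tokens (plus a task head) receive gradients. A minimal sketch of that idea (illustrative only; it assumes an `encoder` whose transformer blocks accept pre-computed token embeddings):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass PromptedBackbone(nn.Module):\n    # Shallow VPT-style wrapper: learnable prompt tokens + frozen encoder.\n    def __init__(self, encoder: nn.Module, embed_dim: int, num_prompts: int = 10):\n        super().__init__()\n        self.encoder = encoder\n        for p in self.encoder.parameters():\n            p.requires_grad = False  # backbone stays frozen\n        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))\n        nn.init.trunc_normal_(self.prompts, std=0.02)\n\n    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:\n        # patch_tokens: (B, N, D) output of the frozen patch-embedding layer\n        b = patch_tokens.shape[0]\n        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)\n        return self.encoder(tokens)  # (B, num_prompts + N, D) through frozen blocks\n```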
\n\n\n\n\n### Prefix Tuning\n- **[1] Prefix-Tuning: Optimizing Continuous Prompts for Generation,** ACL 2021.\n  \n  *Li, Xiang Lisa and Liang, Percy.* \n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.00190)][[Code](https:\u002F\u002Fgithub.com\u002FXiangLi1999\u002FPrefixTuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPrefix_Tuning-blue)\n\n- **[2] Towards a Unified View on Visual Parameter-Efficient Transfer Learning,** Arxiv 2023.\n  \n  *Yu, Bruce XB and Chang, Jianlong and Liu, Lingbo and Tian, Qi and Chen, Chang Wen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00788)][[Code](https:\u002F\u002Fgithub.com\u002Fbruceyo\u002FV-PETL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FV_PETL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[3] Exploring Efficient Few-shot Adaptation for Vision Transformers,** TMLR 2023.\n  \n  *Xu, Chengming and Yang, Siqian and Wang, Yabiao and Wang, Zhanxiong and Fu, Yanwei and Xue, Xiangyang.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.02419.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fchmxu\u002FeTT_TMLR2022)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FeTT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFew_Shot_Learning-green)\n\n- **[4] Visual Query Tuning: Towards Effective Usage of Intermediate Representations for Parameter and Memory Efficient Transfer Learning,** CVPR 2023.\n  \n  *Tu, Cheng-Hao and Mai, Zheda and Chao, Wei-Lun.*\n\n  [[Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FTu_Visual_Query_Tuning_Towards_Effective_Usage_of_Intermediate_Representations_for_CVPR_2023_paper.html)][[Code](https:\u002F\u002Fgithub.com\u002Fandytu28\u002FVQT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVQT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green)\n\n- **[5] A Unified Continual Learning Framework with General Parameter-Efficient Tuning,** ICCV 2023.\n  \n  *Gao, Qiankun and Zhao, Chen and Sun, Yifan and Xi, Teng and Zhang, Gang and Ghanem, Bernard and Zhang, Jian.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.10070.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fgqk\u002FLAE)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinual_Learning-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n\n### Side Tuning\n- **[1] Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks,** ECCV 2020.\n  \n  *Zhang, Jeffrey O and Sax, Alexander and Zamir, Amir and Guibas, Leonidas and Malik, Jitendra.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.13503.pdf)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSide_Tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam_Efficient-orange)\n\n- **[2] LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning,** NeurIPS 2022.\n\n  *Sung, Yi-Lin and Cho, Jaemin and Bansal, Mohit.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.06522)][[Code](https:\u002F\u002Fgithub.com\u002Fylsung\u002FLadder-Side-Tuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLST-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCross_Modal-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)
\n\n- **[3] Vision Transformer Adapter for Dense Predictions,** ICLR 2023.\n\n  *Chen, Zhe and Duan, Yuchen and Wang, Wenhai and He, Junjun and Lu, Tong and Dai, Jifeng and Qiao, Yu.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.08534)][[Code](https:\u002F\u002Fgithub.com\u002Fczczup\u002FViT-Adapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FViT_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam_Efficient-orange)\n\n- **[4] Side Adapter Network for Open-Vocabulary Semantic Segmentation,** CVPR 2023.\n  \n  *Xu, Mengde and Zhang, Zheng and Wei, Fangyun and Hu, Han and Bai, Xiang.*\n\n  [[Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FXu_Side_Adapter_Network_for_Open-Vocabulary_Semantic_Segmentation_CVPR_2023_paper.html)][[Code](https:\u002F\u002Fmendelxu.github.io\u002FSAN\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAN-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam_Efficient-orange)\n\n- **[5] Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone,** NeurIPS 2023.\n  \n  *Jiang, Zeyinzi and Mao, Chaojie and Huang, Ziyuan and Ma, Ao and Lv, Yiliang and Shen, Yujun and Zhao, Deli and Zhou, Jingren.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.19859.pdf)] [Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRes_Tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[6] DTL: Disentangled Transfer Learning for Visual Recognition,** AAAI 2024.\n  \n  *Fu, Minghao and Zhu, Ke and Wu, Jianxin.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07856)][[Code](https:\u002F\u002Fgithub.com\u002Fheekhero\u002FDTL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDTL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[7] Parameter-efficient is not sufficient: Exploring Parameter, Memory, and Time Efficient Adapter Tuning for Dense Predictions,** ACM MM 2024.\n  \n  *Yin, Dongshuo and Han, Xueting and Li, Bin and Feng, Hao and Bai, Jing.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09729)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FE3VA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[8] Ladder Fine-tuning approach for SAM integrating complementary network,** Arxiv 2023.\n  \n  *Chai, Shurong and Jain, Rahul Kumar and Teng, Shiyu and Liu, Jiaqing and Li, Yinhao and others.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12737)][[Code](https:\u002F\u002Fgithub.com\u002F11yxk\u002FSAM-LST)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAM_LST-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[9] End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames,** CVPR 2024.\n  \n  *Liu, Shuming and Zhang, Chen-Lin and Zhao, Chen and Ghanem, Bernard.*
\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.17241.pdf)] [Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdaTAD-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTemporal_Action_Detection-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n  \n- **[10] Time-, Memory- and Parameter-Efficient Visual Adaptation,** CVPR 2024.\n  \n  *Mercea, Otniel-Bogdan and Gritsenko, Alexey and Schmid, Cordelia and Arnab, Anurag.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.02887.pdf)] [Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLoSA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_and_Video_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory&Time&Inference_Efficient-orange)\n\n- **[11] Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning,** ArXiv 2024.\n  \n  *Tang, Ningyuan and Fu, Minghao and Zhu, Ke and Wu, Jianxin.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.04009.pdf)] [Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAST-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[12] LoSA: Long-Short-range Adapter for Scaling End-to-End Temporal Action Localization,** ArXiv 2024.\n  \n  *Gupta, Akshita and Mittal, Gaurav and Magooda, Ahmed and Yu, Ye and Taylor, Graham W and Chen, Mei.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.01282.pdf)] [Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLoSA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTemporal_Action_Localization-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[13] BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation,** ICLR 2024.\n  \n  *Wang, Yaoming and Li, Jin and Zhang, Xiaopeng and Shi, Bowen and Li, Chenglin and Dai, Wenrui and Xiong, Hongkai and Tian, Qi.*\n\n  [[Paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=wHLDHRkmEu)] [[Code](https:\u002F\u002Fgithub.com\u002FNastrondAd\u002FBarLeRIa)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBarLeRIa-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReferring_Image_Segmentation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam_Efficient-orange)\n\n- **[14] UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory,** CVPR 2024.\n  \n  *Haiwen Diao, Bo Wan, Ying Zhang, Xu Jia, Huchuan Lu, Long Chen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14316)] [[Code](https:\u002F\u002Fgithub.com\u002FParanioar\u002FUniPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUniPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVision_Language_&_GLUE_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)\n\n- **[15] SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning,** ECCV 2024.\n  \n  *Haiwen Diao, Bo Wan, Xu Jia, Yunzhi Zhuge, Ying Zhang, Huchuan Lu, Long Chen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.07523)] [[Code](https:\u002F\u002Fgithub.com\u002FParanioar\u002FSHERL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSHERL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVision_Language_&_GLUE_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FParam&Memory_Efficient-orange)
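\n\nThe common thread in the side-tuning family above: a small trainable side network reads intermediate features of the frozen backbone, so gradients never flow through the large model, which is what buys the memory savings. A minimal LST-flavoured sketch (illustrative only; `forward_features_per_block` is a hypothetical stand-in for whatever feature-tapping hook the backbone provides):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass LadderSideNetwork(nn.Module):\n    def __init__(self, backbone: nn.Module, dims: list[int], side_dim: int, num_classes: int):\n        super().__init__()\n        self.backbone = backbone.eval()\n        for p in self.backbone.parameters():\n            p.requires_grad = False\n        # one lightweight rung per tapped backbone block\n        self.downs = nn.ModuleList(nn.Linear(d, side_dim) for d in dims)\n        self.blocks = nn.ModuleList(\n            nn.Sequential(nn.LayerNorm(side_dim), nn.Linear(side_dim, side_dim), nn.GELU())\n            for _ in dims\n        )\n        self.head = nn.Linear(side_dim, num_classes)\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        with torch.no_grad():  # no backbone activations are kept for backprop\n            feats = self.backbone.forward_features_per_block(x)  # hypothetical hook\n        h = 0\n        for f, down, blk in zip(feats, self.downs, self.blocks):\n            h = blk(down(f) + h)  # ladder: fuse tapped feature with side state\n        return self.head(h.mean(dim=1))  # pool tokens, then predict\n```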
\n\n\n### Partial-based Tuning\n### Specification Tuning\n- **[1] Do Better ImageNet Models Transfer Better?** CVPR 2019.\n  \n  *Kornblith, Simon and Shlens, Jonathon and Le, Quoc V.*\n\n  [[Paper](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FKornblith_Do_Better_ImageNet_Models_Transfer_Better_CVPR_2019_paper.html)][[Code](https:\u002F\u002Fgithub.com\u002Flsh3163\u002FImagenet-Better)]\n\n- **[2] BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models,** ACL 2022.\n\n  *Zaken, Elad Ben and Ravfogel, Shauli and Goldberg, Yoav.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.10199.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fbenzakenelad\u002FBitFit)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBitFit-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n\n- **[3] Differentially Private Bias-Term only Fine-tuning of Foundation Models,** Arxiv 2022.\n  \n  *Bu, Zhiqi and Wang, Yu-Xiang and Zha, Sheng and Karypis, George.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00036)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDP_BiTFiT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n  \n- **[4] AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks,** NAACL 2022.\n  \n  *Fu, Chin-Lun and Chen, Zih-Ching and Lee, Yun-Ru and Lee, Hung-yi.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.00305)][[Code](https:\u002F\u002Fgithub.com\u002FAllen0307\u002FAdapterBias)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapterBias-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLayerNorm_Tuning-orange)\n  \n- **[5] Strong Baselines for Parameter Efficient Few-Shot Fine-tuning,** AAAI 2024.\n  \n  *Basu, Samyadeep and Massiceti, Daniela and Hu, Shell Xu and Feizi, Soheil.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01917)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLN_TUNE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLayerNorm_Tuning-orange)\n\n- **[6] DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning,** ICCV 2023.\n  \n  *Enze Xie, Lewei Yao, Han Shi, Zhili Liu, Daquan Zhou, Zhaoqiang Liu, Jiawei Li, Zhenguo Li.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06648)][[Code](https:\u002F\u002Fgithub.com\u002Fmkshing\u002FDiffFit-pytorch)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffFit-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGenerate_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n\n- **[7] Gradient-based Parameter Selection for Efficient Fine-Tuning,** CVPR 2024.\n  \n  *Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10136)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGPS-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImportance_Parameter_Tuning-orange)\n\n- **[8] Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning,** ICCV 2023.\n  \n  *Haoyu He, Jianfei Cai, Jing Zhang, Dacheng Tao, Bohan Zhuang.*
\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08566)][[Code](https:\u002F\u002Fgithub.com\u002Fziplab\u002FSPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImportance_Parameter_Tuning-orange)
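\n\nEvery specification-tuning entry above updates a tiny, hand-specified subset of existing weights (biases for BitFit\u002FDiffFit, LayerNorm affines for LN_TUNE, top-scoring individual weights for GPS\u002FSPT) and adds no new modules at all. A BitFit-style selector is only a few lines (sketch; `head` is an assumed name for the task head):\n\n```python\nimport torch.nn as nn\n\ndef mark_bias_tuning(model: nn.Module, head_prefix: str = 'head') -> list[str]:\n    # Freeze everything except bias terms and the task head; return the tuned names.\n    tuned = []\n    for name, param in model.named_parameters():\n        param.requires_grad = name.endswith('.bias') or name.startswith(head_prefix)\n        if param.requires_grad:\n            tuned.append(name)\n    return tuned\n```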
\n\n\n### Reparameter Tuning\n- **[1] LoRA: Low-Rank Adaptation of Large Language Models,** NeurIPS 2021.\n\n  *Hu, Edward J and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and others.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09685.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLoRA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLoRA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[2] Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning,** NeurIPS 2022.\n  \n  *Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08823)][[Code](https:\u002F\u002Fgithub.com\u002Fdongzelian\u002FSSF)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSSF-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMLP_Tuning-orange)\n\n- **[3] KronA: Parameter Efficient Tuning with Kronecker Adapter,** Arxiv 2023.\n  \n  *Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10650)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKronA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[4] FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer,** AAAI 2023.\n  \n  *Jie, Shibo and Deng, Zhi-Hong.*\n\n  [[Paper](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F25187)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFacT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorization_Decomposition-orange)\n\n- **[5] Aggregate, Decompose, and Fine-Tune: A Simple Yet Effective Factor-Tuning Method for Vision Transformer,** Arxiv 2023.\n  \n  *Chen, Dongping.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.06749)][[Code](https:\u002F\u002Fgithub.com\u002FDongping-Chen\u002FEFFT-EFfective-Factor-Tuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEFFT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorization_Decomposition-orange)\n\n- **[6] Strong Baselines for Parameter Efficient Few-Shot Fine-tuning,** AAAI 2024.\n  \n  *Basu, Samyadeep and Massiceti, Daniela and Hu, Shell Xu and Feizi, Soheil.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01917)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FATTNSCALE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[7] Parameter-efficient Model Adaptation for Vision Transformers,** AAAI 2023.\n  \n  *He, Xuehai and Li, Chunyuan and Zhang, Pengchuan and Yang, Jianwei and Wang, Xin Eric.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.16329)][[Code](https:\u002F\u002Fgithub.com\u002Feric-ai-lab\u002FPEViT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKAdaptation-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[8] DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment,** ECCV 2022.\n  \n  *Jiang, Ziyu and Chen, Tianlong and Chen, Xuxi and Cheng, Yu and Zhou, Luowei and Yuan, Lu and others.*\n\n  [[Paper](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-20044-1_14)][[Code](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FDnA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDnA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[9] Towards Efficient Visual Adaption via Structural Re-parameterization,** Arxiv 2023.\n  \n  *Luo, Gen and Huang, Minglang and Zhou, Yiyi and Sun, Xiaoshuai and Jiang, Guannan and Wang, Zhiyu and Ji, Rongrong.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.08106)][[Code](https:\u002F\u002Fgithub.com\u002Fluogen1996\u002FRepAdapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRepAdapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Reparameter-orange)\n\n- **[10] SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction,** AAAI 2024.\n  \n  *Zelin Peng, Zhengqin Xu, Zhilin Zeng, Xiaokang Yang, Wei Shen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14604)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAM_PARSER-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[11] DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models,** Arxiv 2024.\n  \n  *Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17412)][[Code](https:\u002F\u002Fgithub.com\u002FIBM\u002FDiffuseKronA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffuseKronA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffusionModel-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[12] Expanding Sparse Tuning for Low Memory Usage,** NeurIPS 2024.\n\n  *Shufan Shen, Junshu Sun, Xiangyang Ji, Qingming Huang, Shuhui Wang.*\n  \n  [[Paper](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F8c420176b45e923cf99dee1d7356a763-Paper-Conference.pdf)][[Code](https:\u002F\u002Fgithub.com\u002Fssfgunner\u002FSNELL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSNELL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[13] PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning,** CVPR 2025.\n\n  *Song Wang, Xiaolu Liu, Lingdong Kong, Jianyun Xu, Chunyong Hu, Gongfan Fang, Wentong Li, Jianke Zhu, Xinchao Wang.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16023)][[Code](https:\u002F\u002Fgithub.com\u002Fsongw-zju\u002FPointLoRA)]
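\n\nNearly everything in the reparameter-tuning family reduces to one move: learn a structured, low-parameter update to a frozen weight matrix that can be folded back into it at inference. A minimal LoRA-style linear layer (an illustrative sketch, not the reference implementation):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass LoRALinear(nn.Module):\n    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):\n        super().__init__()\n        self.base = base\n        for p in self.base.parameters():\n            p.requires_grad = False  # pre-trained weight (and bias) stay frozen\n        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)\n        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: update starts at 0\n        self.scale = alpha \u002F rank\n\n    def forward(self, x: torch.Tensor) -> torch.Tensor:\n        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale\n\n    @torch.no_grad()\n    def merge(self) -> nn.Linear:\n        # fold the low-rank update into the frozen weight: no inference overhead\n        self.base.weight += self.scale * (self.B @ self.A)\n        return self.base\n```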
\n\n\n\n### Unified Tuning\n- **[1] Towards a Unified View of Parameter-Efficient Transfer Learning,** ICLR 2022.\n\n  *Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04366)][[Code](https:\u002F\u002Fgithub.com\u002Fjxhe\u002Funify-parameter-efficient-tuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[2] Towards a Unified View on Visual Parameter-Efficient Transfer Learning,** Arxiv 2023.\n\n  *Yu, Bruce XB and Chang, Jianlong and Liu, Lingbo and Tian, Qi and Chen, Chang Wen.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00788)][[Code](https:\u002F\u002Fgithub.com\u002Fbruceyo\u002FV-PETL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FV_PETL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[3] Neural Prompt Search,** Arxiv 2022.\n  \n  *Zhang, Yuanhan and Zhou, Kaiyang and Liu, Ziwei.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04673)][[Code](https:\u002F\u002Fgithub.com\u002FDavidzhangyuanhan\u002FNOAH)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNOAH-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n- **[4] Rethinking Efficient Tuning Methods from a Unified Perspective,** Arxiv 2023.\n  \n  *Jiang, Zeyinzi and Mao, Chaojie and Huang, Ziyuan and Lv, Yiliang and Zhao, Deli and Zhou, Jingren.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.00690.pdf)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FU_Tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[5] A Unified Continual Learning Framework with General Parameter-Efficient Tuning,** ICCV 2023.\n  \n  *Gao, Qiankun and Zhao, Chen and Sun, Yifan and Xi, Teng and Zhang, Gang and Ghanem, Bernard and Zhang, Jian.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10070)][[Code](https:\u002F\u002Fgithub.com\u002Fgqk\u002FLAE)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinual_Learning-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n- **[6] GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction,** Arxiv 2023.\n\n  *Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Suncheng Xiang, Zefang Yu, Ting Liu, Yuzhuo Fu.*\n\n  [[Paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.07255.pdf)][Code] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGIST-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n## 🎯 \u003Cspan id=\"head1\"> *Datasets of Visual PETL* \u003C\u002Fspan>\n| Name | Paper | Link | Notes |\n|:-----|:-----:|:----:|:-----:|\n| **FGVC** | [Visual prompt tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12119) | [Link](https:\u002F\u002Fcornell.app.box.com\u002Fv\u002Fvptfgvcsplits) | FGVC consists of 5 benchmarked Fine-Grained Visual Classification tasks. |
\n| **VTAB-1k** | [A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.04867) | [Link](https:\u002F\u002Fcornell.app.box.com\u002Fv\u002Fvptfgvcsplits) | VTAB-1k consists of 19 diverse visual classification tasks.|\n| **Kinetics-400** | [The Kinetics Human Action Video Dataset](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.06950) | [Link](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F11US3KptpqHsZ5K4wQLzs-OA3Y50OWtPJ\u002Fview?usp=sharing) | Video Action Recognition|\n| **SSv2** | [The “something something” Video Database for Learning and Evaluating Visual Common Sense](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04261) | [Link](https:\u002F\u002Fdeveloper.qualcomm.com\u002Fsoftware\u002Fai-datasets\u002Fsomething-something) | Video Action Recognition|\n| **HMDB51** | [HMDB: A Large Video Database for Human Motion Recognition](http:\u002F\u002Fcbcl.mit.edu\u002Fpublications\u002Fps\u002FKuehne_etal_iccv11.pdf) | [Link](http:\u002F\u002Fserre-lab.clps.brown.edu\u002Fresource\u002Fhmdb-a-large-human-motion-database\u002F) | Video Action Recognition|\n| **Diving-48** | [RESOUND: Towards Action Recognition without Representation Bias](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FYingwei_Li_RESOUND_Towards_Action_ECCV_2018_paper.pdf) | [Link](http:\u002F\u002Fwww.svcl.ucsd.edu\u002Fprojects\u002Fresound\u002Fdataset.html) | Video Action Recognition|\n| **UCF-101** | [UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild](https:\u002F\u002Farxiv.org\u002Fabs\u002F1212.0402) | [Link](https:\u002F\u002Fwww.crcv.ucf.edu\u002Fdata\u002FUCF101.php) | Video Action Recognition|\n| **MSCOCO** | [Microsoft COCO: Common Objects in Context](https:\u002F\u002Farxiv.org\u002Fabs\u002F1405.0312) | [Link](http:\u002F\u002Fcocodataset.org\u002F) | Instance Segmentation|\n| **ADE20K** | [Semantic Understanding of Scenes through the ADE20K Dataset](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.05442) | [Link](http:\u002F\u002Fgroups.csail.mit.edu\u002Fvision\u002Fdatasets\u002FADE20K\u002F) | Semantic Segmentation|\n| **PASCAL VOC** | [The Pascal Visual Object Classes Challenge: A Retrospective](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FThe-Pascal-Visual-Object-Classes-Challenge%3A-A-Everingham-Eslami\u002F616b246e332573af1f4859aa91440280774c183a) | [Link](https:\u002F\u002Fhost.robots.ox.ac.uk\u002Fpascal\u002FVOC\u002Fvoc2012\u002F) | Semantic Segmentation|\n\n## 🧒 \u003Cspan id=\"head1\"> *Contribution* \u003C\u002Fspan>\n\n\u003C!-- Copy-paste in your Readme.md file -->\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsynbol_Awesome-Parameter-Efficient-Transfer-Learning_readme_f1d38ecfbda0.png\" \u002F>\n\u003C\u002Fa>\n\n### :clap: Thanks to the above contributors for this excellent work!\n\n\n## ⭐ \u003Cspan id=\"head1\"> *Citation* \u003C\u002Fspan>\n\nIf you find our survey and repository useful for your research, please cite the survey below:\n\n```bibtex\n@article{xin2024parameter,\n  title={Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey},\n  author={Xin, Yi and Luo, Siqi and Zhou, Haodi and Du, Junlong and Liu, Xiaohong and Fan, Yue and Li, Qing and Du, Yuntao},\n  journal={arXiv preprint arXiv:2402.02242},\n  year={2024}\n}\n```\n","## \u003Cp 
align=center>𝓐𝔀𝓮𝓼𝓸𝓶𝓮 𝓟𝓪𝓻𝓪𝓶𝓮𝓽𝓮𝓻-𝓔𝓯𝓯𝓲𝓬𝓲𝓮𝓷𝓽 𝓣𝓻𝓪𝓷𝓼𝓯𝓮𝓻 𝓛𝓮𝓪𝓻𝓷𝓲𝓷𝓰\u003C\u002Fp>\n\u003Cdiv align=center>\n\n\u003Cp>\n\n ![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning.svg?color=red&style=for-the-badge) \n ![GitHub 分支](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning.svg?style=for-the-badge) \n ![GitHub 活动](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning?color=yellow&style=for-the-badge) \n ![GitHub 问题](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning?style=for-the-badge)\n\n [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n [![维护](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg)](https:\u002F\u002FGitHub.com\u002FNaereen\u002FStrapDown.js\u002Fgraphs\u002Fcommit-activity)\n\u003C\u002Fp>\n\n关于参数高效迁移学习的资源集合。\n\n\u003C\u002Fdiv>\n\n## 📚 \u003Cspan id=\"head1\"> *目录* \u003C\u002Fspan>\n- [简介](#introduction)\n\n- [关键词](#keywords)\n\n- [论文](#papers)\n\n- [Addition-based Tuning](#addition-based-tuning)\n    - [Adapter Tuning](#adapter-tuning)&emsp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-30-2EA9DF?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aC
BkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Prompt Tuning](#prompt-tuning)&emsp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-27-90B44B?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdG
ggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Prefix Tuning](#prefix-tuning)&emsp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-5-B481BB?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTV
oMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Side Tuning](#side-tuning)&emsp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-15-F9BF45?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHB
hdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n  - [Partial-based Tuning](#partial-based-tuning)\n    - [Specification Tuning](#specification-tuning)&emsp;&emsp;&emsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-8-E83015?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiB
maWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n    - [Reparameter Tuning](#reparameter-tuning)&ensp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-10-2EA9DF?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhd
GggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n  - [Unified Tuning](#unified-tuning)&ensp;&emsp;&nbsp;\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNumber%20of%20Papers-6-B481BB?style=flat-square&logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyBpZD0iQ2FwYV8xIiBlbmFibGUtYmFja2dyb3VuZD0ibmV3IDAgMCA1MTIgNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIHdpZHRoPSI1MTIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+PGc+PHBhdGggZD0ibTM5NS44MiAxODIuNjE2LTE4OC43MiAxODguNzItMTIuOTEgMS43Mi05LjM1IDIwLjU0LTM0LjMxIDM0LjMxLTExLjAxLS43My0xMS4yNSAyMi45OS01Ni40OCA1Ni40OGMtMi45MyAyLjkzLTYuNzcgNC4zOS0xMC42MSA0LjM5cy03LjY4LTEuNDYtMTAuNjEtNC4zOWwtMjIuNjItMjIuNjJoLS4wMWwtMjIuNjItMjIuNjNjLTUuODYtNS44Ni01Ljg2LTE1LjM2IDAtMjEuMjJsNzcuNjMtNzcuNjMgMTYuNi03LjAzIDUuNjYtMTUuMjMgMzQuMzEtMzQuMzEgMTQuODQtNC45MiA3LjQyLTE3LjM0IDE2Ny41Ny0xNjcuNTcgMzMuMjQgMzMuMjR6IiBmaWxsPSIjZjY2Ii8+PHBhdGggZD0ibTM5NS44MiAxMTYuMTQ2djY2LjQ3bC0xODguNzIgMTg4LjcyLTEyLjkxIDEuNzItOS4zNSAyMC41NC0zNC4zMSAzNC4zMS0xMS4wMS0uNzMtMTEuMjUgMjIuOTktNTYuNDggNTYuNDhjLTIuOTMgMi45My02Ljc3IDQuMzktMTAuNjEgNC4zOXMtNy42OC0xLjQ2LTEwLjYxLTQuMzlsLTIyLjYyLTIyLjYyIDMzNC42NC0zMzQuNjR6IiBmaWxsPSIjZTYyZTZiIi8+PHBhdGggZD0ibTUwNi42MSAyMDkuMDA2LTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xNy00LjUyLTQuNTMtMTEuNDItNS42OC0xNy4xNy0yLjg4bC04OC4zOCA0My4wNS02OS4xMy02OS4xNGMtNC4zNS00LjM1LTEwLjkyLTUuNi0xNi41Ni0zLjE2LTUuNjUgMi40NS05LjIzIDguMDktOS4wNCAxNC4yNGwyLjg2IDkwLjQ1LTg1LjM3IDU3LjgzYy00LjkxIDMuMzItNy40IDkuMjItNi4zNiAxNS4wNCAxLjA0IDUuODMgNS40IDEwLjUxIDExLjE1IDExLjk0bDk2LjYyIDI0LjAxIDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZ6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTI5Ni4yNiAyMTUuNzA2IDI0LjAxIDk2LjYyYzEuNDMgNS43NSA2LjExIDEwLjExIDExLjk0IDExLjE1Ljg3LjE2IDEuNzUuMjMgMi42Mi4yMyA0LjkyIDAgOS42LTIuNDIgMTIuNDItNi41OWw1Ny44My04NS4zNyA5MC40NSAyLjg2YzYuMTQuMTkgMTEuNzktMy4zOSAxNC4yNC05LjA0IDIuNDQtNS42NCAxLjE5LTEyLjIxLTMuMTYtMTYuNTZsLTY5LjE0LTY5LjEzIDQzLjA1LTg4LjM4YzIuOC01Ljc1IDEuNjUtMTIuNjUtMi44OC0xNy4xN3oiIGZpbGw9IiNmZDkwMjUiLz48cGF0aCBkPSJtNDY1IDQxNi45NjZjLTI1LjkyIDAtNDcgMjEuMDgtNDcgNDdzMjEuMDggNDcgNDcgNDcgNDctMjEuMDggNDctNDctMjEuMDgtNDctNDctNDd6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTEwNCAyOC45NjZoLTEzdi0xM2MwLTguMjg0LTYuNzE2LTE1LTE1LTE1cy0xNSA2LjcxNi0xNSAxNXYxM2gtMTNjLTguMjg0IDAtMTUgNi43MTYtMTUgMTVzNi43MTYgMTUgMTUgMTVoMTN2MTNjMCA4LjI4NCA2LjcxNiAxNSAxNSAxNXMxNS02LjcxNiAxNS0xNXYtMTNoMTNjOC4yODQgMCAxNS02LjcxNiAxNS0xNXMtNi43MTYtMTUtMTUtMTV6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTIwNy4xIDM3MS4zMzYtMjIuMjYgMjIuMjYtNDUuMzItODcuNjIgMjIuMjYtMjIuMjZ6IiBmaWxsPSIjZmVkODQzIi8+PHBhdGggZD0ibTE4NC44NCAzOTMuNTk2IDIyLjI2LTIyLjI2LTIyLjY2LTQzLjgxLTIyLjI2NSAyMi4yNjV6IiBmaWxsPSIjZmFiZTJjIi8+PHBhdGggZD0ibTE1MC41MyA0MjcuOTA2LTIyLjI2IDIyLjI2LTQ1LjMyLTg3LjYyIDIyLjI2LTIyLjI2eiIgZmlsbD0iI2ZlZDg0MyIvPjxwYXRoIGQ9Im0xMjguMjcgNDUwLjE2NiAyMi4yNi0yMi
4yNi0yMi42NTUtNDMuODE1LTIyLjI2IDIyLjI2eiIgZmlsbD0iI2ZhYmUyYyIvPjxjaXJjbGUgY3g9IjE1IiBjeT0iMTE5Ljk2OSIgZmlsbD0iIzVlZDhkMyIgcj0iMTUiLz48Y2lyY2xlIGN4PSIxMjgiIGN5PSIxOTkuOTY5IiBmaWxsPSIjZDU5OWVkIiByPSIxNSIvPjxjaXJjbGUgY3g9IjE5MiIgY3k9IjYzLjk2NCIgZmlsbD0iI2Y2NiIgcj0iMTUiLz48Y2lyY2xlIGN4PSIzMjgiIGN5PSI0MTUuOTY3IiBmaWxsPSIjMzFiZWJlIiByPSIxNSIvPjxjaXJjbGUgY3g9IjQ0MCIgY3k9IjMyNy45NjciIGZpbGw9IiNhZDc3ZTMiIHI9IjE0Ljk5OSIvPjwvZz48L3N2Zz4=\" alt=\"PaperNum\"\u002F>\n  \n- [Datasets](#datasets-of-visual-peft)\n\n- [贡献](#contribution)\n\n\n\n## 📝 \u003Cspan id=\"head1\"> *简介* \u003C\u002Fspan>\n* **参数高效微调（PEFT）** 旨在以最少的参数修改，超越全量微调的性能。\n* 本仓库提供了一个全面的综述，并系统性地回顾了最新的进展。它提出了一种分类标准，将现有方法划分为三类：**基于添加的微调、基于部分的微调和基于统一的微调**。\n* 此外，本仓库还介绍了常用的数据集和应用场景。\n\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsynbol_Awesome-Parameter-Efficient-Transfer-Learning_readme_820a9ecae4f6.png\" width=\"100%\" height=\"100%\">\n\u003C\u002Fdiv>\n\n## 💬 \u003Cspan id=\"head1\"> *关键词* \u003C\u002Fspan>\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAbbreviation-blue) 该工作的缩写。\n\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FApplication-green) 该工作主要探索的任务\u002F应用。\n\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOther-orange) 该工作其他重要信息。\n\n## 🐌 \u003Cspan id=\"head1\"> *论文* \u003C\u002Fspan>\n### 基于添加的微调\n### Adapter 微调\n- **[1] AdaptFormer: 用于可扩展视觉识别的视觉 Transformer 自适应模型，** NeurIPS 2022。\n  \n  *陈守法、葛崇健、佟展、王江柳、宋一兵、王珏、罗平。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13535)][[代码](https:\u002F\u002Fgithub.com\u002FShoufaChen\u002FAdaptFormer)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdaptFormer-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green)  ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[2] 卷积旁路是更好的视觉 Transformer Adapter，** Arxiv 2022。\n  \n  *Jie, Shibo 和 Deng, Zhi-Hong。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07039)][[代码](https:\u002F\u002Fgithub.com\u002FJieShibo\u002FPETL-ViT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConvpass-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDomain_Generalization-green)  ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[3] ST-Adapter: 参数高效的图像到视频迁移学习，** NeurIPS 2022。\n  \n  *潘俊廷、林子怡、朱夏田、邵静、李洪生。*\n\n  [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Fa92e9165b22d4456fc6d87236e04c266-Abstract-Conference.html)][[代码](https:\u002F\u002Fgithub.com\u002Flinziyi96\u002FST-Adapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FST_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green)  ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[4] AIM: 用于高效视频动作识别的图像模型自适应，** ICLR 2023。\n  \n  *杨涛建南、朱毅、谢宇升、张阿斯顿、陈晨、李牧。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.03024)][[代码](https:\u002F\u002Fadapt-image-models.github.io\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAIM-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[5] 预训练视觉模型在机器人操作中的无损自适应，** ICLR 2023。\n  \n  *夏尔马、范塔奇、周宇翔、科普拉以及其他研究人员。*\n\n  
[[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06600)][[代码](https:\u002F\u002Fsites.google.com\u002Fview\u002Frobo-adapters\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRobo_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRobotic_Manipulation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[6] 1% VS 100%: 用于密集预测的参数高效低秩 Adapter，** CVPR 2023。\n  \n  *尹东硕、杨艺然、王哲超、余洪峰、魏凯文、孙贤。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FYin_1_VS_100_Parameter-Efficient_Low_Rank_Adapter_for_Dense_Predictions_CVPR_2023_paper.html)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLoRand-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[7] Polyhistor: 用于密集视觉任务的参数高效多任务自适应，** NeurIPS 2022。\n  \n  *刘延成、马志尧、田俊娇、何子健、齐索特·基拉。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03265)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPolyhistor-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[8] VMT-Adapter: 用于多任务密集场景理解的参数高效迁移学习，** AAAI 2024。\n  \n  *易欣、杜俊龙、王强、林志文、严科。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.08733)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVMT_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[9] SCT: 通过显著通道实现参数高效微调的简单基线，** IJCV 2023。\n  \n  *赵恒远、王皮超、赵宇洋、罗浩、王凡、Mike Zheng Shou。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07910)][[代码](https:\u002F\u002Fgithub.com\u002Fshowlab\u002FSCT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSCT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[10] 重要通道微调，** Openreview 2023。\n  \n  *赵恒远、王皮超、赵宇洋、王凡、Mike Zheng Shou。*\n\n  [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=TTMyoOdB9hZ)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FICT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[11] 重访参数高效迁移学习：一种两阶段范式，** Arxiv 2023。\n  \n  *赵恒远、罗浩、赵宇洋、王皮超、王凡、Mike Zheng Shou。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07910)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTTC_Tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[12] Compacter: 高效低秩超复数 Adapter 层，** NeurIPS 2021。\n  \n  *Karimi Mahabadi, Rabeeh、Henderson, James 和 Ruder, Sebastian。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04647)][[代码](https:\u002F\u002Fgithub.com\u002Frabeehk\u002Fcompacter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCOMPACTER-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[13] 参数高效且对学生友好的知识蒸馏，** NeurIPS 2022。\n  \n  *饶俊、孟旭、丁亮、齐书涵、陶大成。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.15308)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPESF_KD-blue)\n\n- **[14] VL-adapter: 用于视觉-语言任务的参数高效迁移学习，** CVPR 2022。\n  \n  *Sung, Yi-Lin、Cho, Jaemin、Bansal, 
Mohit。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.06825)][[代码](https:\u002F\u002Fgithub.com\u002Fylsung\u002FVL_adapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVL_adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCross_Modal-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n  \n- **[15] UniAdapter: 用于跨模态建模的统一参数高效迁移学习，** ICLR 2024。\n  \n  *陆浩宇、丁明宇、霍宇奇、杨国兴、卢志武、富冢正芳、詹伟。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06605)][[代码](https:\u002F\u002Fgithub.com\u002FRERV\u002FUniAdapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUniAdapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCross_Modal-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[16] 基于跨块编排的参数高效微调方法应用于 Segment Anything Model，** Arxiv 2023。\n\n  *彭泽林、徐正钦、曾志林、谢凌熙、田琦、沈伟。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.17112.pdf)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAM_COBOT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[17] Hydra: 用于参数高效微调的多头低秩适配器，** Arxiv 2023。\n\n  *金相贤、梁贤模、金英贤、洪英俊、朴恩炳。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06922)][[代码](https:\u002F\u002Fgithub.com\u002Fextremebird\u002FHydra\u002Ftree\u002Fmain)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHydra-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[18] MixPHM: 面向低资源视觉问答任务的冗余感知型参数高效微调方法，** CVPR 2023。\n\n  *蒋静静、郑南宁。*\n  \n  [[论文](https:\u002F\u002Fweb3.arxiv.org\u002Fabs\u002F2303.01239)][[代码](https:\u002F\u002Fgithub.com\u002Fjingjing12110\u002FMixPHM)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMixPHM-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCross_Modal-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Optimization-orange)\n\n- **[19] 视觉Transformer是参数高效的音视频学习模型，** CVPR 2023。\n\n  *林延博、宋怡琳、雷杰、班萨尔、贝尔塔修斯。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.07983)][[代码](https:\u002F\u002Fgenjib.github.io\u002Fproject_page\u002FLAVISH\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAVISH-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCross_Modal-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[20] SAM-Adapter: 在表现欠佳场景下对 Segment Anything 的适配，** ICCVW 2023。\n\n  *陈天润、朱兰云、邓超涛、曹润龙、王岩、张尚展、李泽健、孙凌云、臧颖、Papa Mao。*\n  \n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023W\u002FVCL\u002Fpapers\u002FChen_SAM-Adapter_Adapting_Segment_Anything_in_Underperformed_Scenes_ICCVW_2023_paper.pdf)][[代码](http:\u002F\u002Fresearch.kokoni3d.com\u002Fsam-adapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAM-Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Segmentation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n  \n- **[21] T2I-Adapter: 学习适配器以挖掘文本到图像扩散模型的更多可控能力，** AAAI 2024。\n\n  *牟冲、王新涛、谢良斌、张健、齐中刚等。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.08453)][[代码](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FT2I-Adapter)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FT2I_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FText2Image-green) 
![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[22] I2V-Adapter: 一种用于视频扩散模型的通用图像到视频适配器，** Arxiv 2023。\n\n  *郭勋、郑明武、侯亮、高远、邓宇凡等。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.16693)][[代码](https:\u002F\u002Fgithub.com\u002FI2V-Adapter\u002FI2V-Adapter-repo)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FI2V_Adapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage2Video-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[23] AdaptIR: 针对预训练图像修复模型的参数高效多任务适配，** Arxiv 2023。\n\n  *郭航、戴涛、白元朝、陈彬、夏树涛、朱泽轩。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.08881.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fcsguoh\u002FAdaptIR)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdaptIR-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSuper_Resolution-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[24] 深入探讨扩散模型中的参数高效微调，** Arxiv 2023。\n\n  *项晨东、鲍帆、李崇轩、苏航、朱军。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18181)][[代码](https:\u002F\u002Fgithub.com\u002FXiang-cd\u002Funet-finetune)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnet_Finetune-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGenerate_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[25] CAST: 用于视频动作识别的时空交叉注意力机制，** NeurIPS 2023。\n\n  *李东浩、李宗瑞、崔振宇。*\n  \n  [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Ffb1b83b35e96998ddfc0ce1dab635445-Paper-Conference.pdf)][[代码](https:\u002F\u002Fgithub.com\u002FKHU-VLL\u002FCAST)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCAST_Finetune-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Action_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Design-orange)\n\n- **[26] 动态适配器与提示微调结合：用于点云分析的参数高效迁移学习，** CVPR 2024。\n\n  *周鑫、梁定康、许伟、朱星奎、徐一涵、邹志康、白翔。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01439)][[代码](https:\u002F\u002Fgithub.com\u002FLMD0311\u002FDAPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDAPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_Cloud-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_with_Prompt-orange)\n\n- **[27] MoMA: 多模态大语言模型适配器，用于快速个性化图像生成，** ArXiv 2024。\n\n  *宋鲲鹏、朱义哲、刘冰辰、严青、艾哈迈德·埃尔加马尔、杨晓。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.05674.pdf)][[代码](https:\u002F\u002Fmoma-adapter.github.io\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMoMA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPersonalized_Image_Generation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_with_Prompt-orange)\n\n- **[28] 沟通视觉与语言编码器：用于引用式图像分割的参数高效微调，** ICCV 2023。\n\n  *徐尊楠、陈志宏、张勇、宋一兵、万翔、李冠斌。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FXu_Bridging_Vision_and_Language_Encoders_Parameter-Efficient_Tuning_for_Referring_Image_ICCV_2023_paper.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fkkakkkka\u002FETRIS)]\n\n- **[29] 通过适配器增强细粒度多模态对齐：用于引用式图像分割的参数高效训练框架，** WANT @ ICML 2024。\n\n  *徐尊楠、黄佳琪、刘婷、刘勇、韩浩南、袁克洪、李秀。*\n  \n  [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=bp8xXLi2Mp)][[代码](https:\u002F\u002Fkkakkkka.github.io\u002Fdcris)]\n\n- **[30] 稀疏微调：通过高效微调和推理适应视觉Transformer模型，** ArXiv 2024。\n\n  *刘婷、刘旭阳、石亮涛、徐尊南、黄思腾、辛毅、殷全军。*\n\n  
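[[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2405.14700)][[代码](https:\u002F\u002Fgithub.com\u002Fliuting20\u002FSparse-Tuning)]\n\n- **[31] PAVE：视频大型语言模型的补丁与适配，** CVPR 2025。\n\n  *刘卓明、李一泉、阮辉德、钟义武、李音。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19794)][[代码](https:\u002F\u002Fgithub.com\u002Fdragonlzm\u002FPAVE)]\n\n上述 Adapter 类方法虽各有侧重，但共同骨架是在冻结的主干中插入小型瓶颈模块并以残差方式融合。下面给出一个极简的 PyTorch 示意（并非上述任何论文的官方实现，维度、瓶颈大小与插入位置均为演示用的假设）：\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass Adapter(nn.Module):\n    \"\"\"瓶颈结构 Adapter：下投影 -> 非线性 -> 上投影 + 残差连接。\"\"\"\n    def __init__(self, dim: int, bottleneck: int = 64):\n        super().__init__()\n        self.down = nn.Linear(dim, bottleneck)\n        self.act = nn.GELU()\n        self.up = nn.Linear(bottleneck, dim)\n        nn.init.zeros_(self.up.weight)  # 零初始化：训练起点与原模型完全等价\n        nn.init.zeros_(self.up.bias)\n\n    def forward(self, x):\n        return x + self.up(self.act(self.down(x)))  # 残差保持主干特征\n\n# 用法示意：冻结主干，仅训练 Adapter 的少量参数\nbackbone = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)\nfor p in backbone.parameters():\n    p.requires_grad = False\nadapter = Adapter(dim=768)\nout = adapter(backbone(torch.randn(2, 196, 768)))  # (batch, tokens, dim)\n```\n\n各论文的差异主要体现在插入位置（串行或并行、注意力层或 MLP 层）以及瓶颈结构本身的设计上。\n\n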
### 提示词微调\n- **[1] 视觉提示词微调，** ECCV 2022。\n  \n  *贾梦琳、唐路明、陈博淳、克莱尔·卡迪、塞尔日·贝隆吉、巴拉特·哈里哈兰、林世南。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.12119)][[代码](https:\u002F\u002Fgithub.com\u002Fkmnp\u002Fvpt)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[2] 针对测试时领域适应的视觉提示词微调，** Arxiv 2022。\n  \n  *高云鹤、史兴健、朱毅、王浩、唐志强、周雄等。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.04831.pdf)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDePT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTest_Time_Adaptation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[3] LPT：用于图像分类的长尾提示词微调，** ICLR 2023。\n  \n  *董博文、周攀、严水成、左望蒙。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01033)][[代码](https:\u002F\u002Fgithub.com\u002FDongSky\u002FLPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[4] Pro-tuning：面向视觉任务的统一提示词微调，** TCSVT 2023。\n  \n  *聂星、倪波林、常建龙、孟高锋、霍春雷等。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.14381)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPro_tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[5] 面向预训练点云模型的实例感知动态提示词微调，** ICCV 2023。\n  \n  *查耀华、王金鹏、戴涛、陈斌、王志、夏树涛。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.07221.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fzyh16143998882\u002FICCV23-IDPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FIDPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_Cloud-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[6] 视觉提示词多模态跟踪，** CVPR 2023。\n  \n  *朱嘉文、赖思淼、陈欣、王东、陆虎川。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FZhu_Visual_Prompt_Multi-Modal_Tracking_CVPR_2023_paper.html)][[代码](https:\u002F\u002Fgithub.com\u002Fjiawen-zhu\u002FViPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FViPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Tracking-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[7] LION：隐式视觉提示词微调，** AAAI 2024。\n  \n  *王海鑫、常建龙、罗晓、孙济南、林周晨、田琪。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09992)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLION-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[8] 用于鲁棒视觉感知的卷积视觉提示词，** NeurIPS 2023。\n  \n  *蔡韵韵、毛成志、杨俊峰。*\n\n  [[论文](https:\u002F\u002Fopenreview.net\u002Fforum?id=qgmrC8jhCo)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCVP-blue) 
![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTest_Time_Adaptation-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[9] ProSFDA：基于提示学习的无源域适应方法，用于医学图像分割，** Arxiv 2023。\n  \n  *胡士帅、廖泽辉、夏勇。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11514)][[代码](https:\u002F\u002Fgithub.com\u002FShishuaiHu\u002FProSFDA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProSFDA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[10] 针对低级结构分割的显式视觉提示词，** CVPR 2023。\n  \n  *刘伟煌、沈曦、潘志民、存晓东。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLiu_Explicit_Visual_Prompting_for_Low-Level_Structure_Segmentations_CVPR_2023_paper.html)][[代码](https:\u002F\u002Fgithub.com\u002FNiFangBaAGe\u002FExplicit-Visual-Prompt)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEVP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[11] P2P：利用点到像素提示词微调预训练图像模型以进行点云分析，** NeurIPS 2022。\n  \n  *王子怡、于旭敏、饶永明、周杰、陆继文。*\n\n  [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F5cd6dc946ccc37ae6c9f4fc6b6181e1d-Abstract-Conference.html)][[代码](https:\u002F\u002Fgithub.com\u002Fwangzy22\u002FP2P)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FP2P-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_Cloud-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[12] 探索用于大规模模型适配的视觉提示词，** Arxiv 2022。\n  \n  *方孝珍、贾哈尼安、桑卡拉纳拉亚南、伊索拉。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.17274)][[代码](https:\u002F\u002Fhjbahng.github.io\u002Fvisual_prompting\u002F)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[13] 在像素级别释放视觉提示词的力量，** Arxiv 2023。\n  \n  *吴俊洋、李贤航、魏辰、王慧宇、尤伊尔、周雨寅、谢慈航。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10556)][[代码](https:\u002F\u002Fgithub.com\u002FUCSC-VLAA\u002FEVP)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEVP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[14] 从标签映射视角理解并改进视觉提示词，** CVPR 2023。\n  \n  *陈奥川、姚玉光、陈品宇、张义华、刘思佳。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FChen_Understanding_and_Improving_Visual_Prompting_A_Label-Mapping_Perspective_CVPR_2023_paper.html)][[代码](https:\u002F\u002Fgithub.com\u002FOPTML-Group\u002FILM-VP)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FILM_VP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[15] 学习为视觉-语言模型生成提示词，** IJCV 2022。\n  \n  *周凯阳、杨景康、洛伊陈昌、刘子威。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.01134)][[代码](https:\u002F\u002Fgithub.com\u002FKaiyangZhou\u002FCoOp)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCoOp-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FText_Prompt-orange)\n\n- **[16] Hyperprompt：基于提示的任务条件化 Transformer 模型，** ICML 2022。\n  \n  
*何云、郑史蒂文、泰伊、古普塔·贾伊、杜宇、阿里班迪·万西等。*\n\n  [[论文](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fhe22f.html)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHyperPrompt-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMulti_Task-green)\n\n- **[17] MaPLe：多模态提示学习，** CVPR 2023。\n  \n  *哈塔克·穆罕默德·乌宰尔、拉希德·哈努娜、马兹·穆罕默德等。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03117)][[代码](https:\u002F\u002Fgithub.com\u002Fmuzairkhattak\u002Fmultimodal-prompt-learning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaPLe-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Prompt-orange)\n\n- **[18] 多任务学习中的层次化提示学习，** CVPR 2023。\n  \n  *刘亚静、陆雨宁、刘浩、安耀祖、徐卓然、姚卓坤等。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FLiu_Hierarchical_Prompt_Learning_for_Multi-Task_Learning_CVPR_2023_paper.html)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHiPro-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FText_Prompt-orange)\n\n- **[19] 自动驾驶中统一感知的视觉示例驱动任务提示，** CVPR 2023。\n  \n  *梁锡文、牛敏哲、韩建华、徐航、徐春景、梁晓丹。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.01788)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVE_Prompt-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMulti_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAutonomous_Driving-green)\n\n- **[20] 视觉-语言预训练模型的双模态提示调优，** TMM 2023。\n  \n  *邢英辉、吴琪瑞、程德、张世洲、梁国强等。*\n\n  [[论文](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10171397\u002F)][[代码](https:\u002F\u002Fgithub.com\u002Ffanrena\u002FDPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green)\n\n- **[21] 通过提示词实现万物标记化（Tokenize Anything），** Arxiv 2023。\n  \n  *潘婷、唐露露、王新龙、单世光。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.09128.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fbaaivision\u002Ftokenize-anything)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTAP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green)\n\n- **[22] MmAP：跨领域多任务学习的多模态对齐提示，** AAAI 2024。\n  \n  *易欣、杜俊龙、王强、严科、丁守红。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.08636)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMmAP-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Prompt-orange)\n\n- **[23] 多样性感知的元视觉提示，** CVPR 2023。\n  \n  *黄启东、董晓义、陈冬冬、张伟明、王菲菲、华刚、于能海。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08138)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDAM-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPixel_Level-orange)\n\n- **[24] 跨模态提示：将大型预训练模型适配到音视频下游任务，** NeurIPS 2023。\n  \n  *段浩毅、夏燕、周明泽、唐莉、朱继明、赵周。*\n\n  [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Ffile\u002Faf01716e08073368a7c8a62be46dba17-Paper-Conference.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fhaoyi-duan\u002FDG-SCT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDG-SCT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAudio-visual_Understanding-green) 
![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMultiModal_Prompt-orange)\n  \n- **[25] Point-PEFT：3D预训练模型的参数高效微调，** AAAI 2024。\n\n  *唐艺文、张雷、郭佐怡、马贤正、王东、王志刚、赵斌、李雪龙。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03059)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_PEFT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPoint_Cloud-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[26] E2VPT：一种有效且高效的视觉提示调优方法，** ICCV 2023。\n  \n  *程汉、王奇凡、崔一鸣、曹志文、王文冠、戚思源、刘东方。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.13770)][[代码](https:\u002F\u002Fgithub.com\u002FChengHan111\u002FE2VPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FE2VPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEmbedding_Level-orange)\n\n- **[27] DGL：用于文本-视频检索的动态全局-局部提示调优，** AAAI 2024。\n  \n  *杨向鹏、朱林超、王晓涵、杨毅。*\n  \n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10588)][[代码](https:\u002F\u002Fgithub.com\u002Fknightyxp\u002FDGL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDGL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FText_Video_Retrieval-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGlobal_Local_Prompt-orange)\n\n### 前缀调优\n- **[1] 前缀调优：优化连续提示以用于生成任务，** ACL 2021。\n  \n  *Li, Xiang Lisa 和 Liang, Percy。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.00190)][[代码](https:\u002F\u002Fgithub.com\u002FXiangLi1999\u002FPrefixTuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPrefix_Tuning-blue)\n\n- **[2] 向视觉参数高效迁移学习的统一视角迈进，** Arxiv 2023。\n  \n  *Yu, Bruce XB、Chang, Jianlong、Liu, Lingbo、Tian, Qi 和 Chen, Chang Wen。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00788)][[代码](https:\u002F\u002Fgithub.com\u002Fbruceyo\u002FV-PETL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FV_PETL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[3] 探索视觉Transformer的高效少样本适应方法，** TMLR 2023。\n  \n  *Xu, Chengming、Yang, Siqian、Wang, Yabiao、Wang, Zhanxiong、Fu, Yanwei 和 Xue, Xiangyang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.02419.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fchmxu\u002FeTT_TMLR2022)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FeTT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFew_Shot_Learning-green)\n\n- **[4] 视觉查询调优：迈向有效利用中间表示以实现参数与内存高效的迁移学习，** CVPR 2023。\n  \n  *Tu, Cheng-Hao、Mai, Zheda 和 Chao, Wei-Lun。*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FTu_Visual_Query_Tuning_Towards_Effective_Usage_of_Intermediate_Representations_for_CVPR_2023_paper.html)][[代码](https:\u002F\u002Fgithub.com\u002Fandytu28\u002FVQT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVQT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green)\n\n- **[5] 具有通用参数高效调优的统一持续学习框架，** ICCV 2023。\n  \n  *Gao, Qiankun、Zhao, Chen、Sun, Yifan、Xi, Teng、Zhang, Gang、Ghanem, Bernard、Zhang, Jian。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.10070.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fgqk\u002FLAE)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinual_Learning-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n
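提示词与前缀调优的共同思想，是在冻结主干的前提下向输入序列（或各层注意力的 K\u002FV）拼接少量可学习 token。下面是一个只在输入层面演示该思想的极简示意（并非 VPT 或 Prefix-Tuning 的官方实现，提示词数量等超参数均为演示用的假设）：\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass PromptedEncoder(nn.Module):\n    \"\"\"在输入序列前拼接可学习提示词 token（VPT 风格的示意）。\"\"\"\n    def __init__(self, dim: int = 768, num_prompts: int = 10):\n        super().__init__()\n        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)\n        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)\n        for p in self.encoder.parameters():  # 冻结主干\n            p.requires_grad = False\n\n    def forward(self, x):  # x: (batch, tokens, dim)\n        prompts = self.prompts.expand(x.size(0), -1, -1)\n        out = self.encoder(torch.cat([prompts, x], dim=1))\n        return out[:, self.prompts.size(1):]  # 丢弃提示词位置的输出\n\nmodel = PromptedEncoder()\nout = model(torch.randn(2, 196, 768))  # 反向传播时仅 prompts 参与梯度更新\n```\n\n前缀调优与上述做法的区别在于：它把可学习向量注入每一层注意力的键值对，而不只是拼接在输入 embedding 上。\n\n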
### 基于部分的微调\n\n### 规范调优\n- **[1] 更好的 ImageNet 模型迁移效果更好吗？** CVPR 2019。\n\n  *Kornblith, Simon、Shlens, Jonathon 和 Le, Quoc V.*\n\n  [[论文](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fhtml\u002FKornblith_Do_Better_ImageNet_Models_Transfer_Better_CVPR_2019_paper.html)][[代码](https:\u002F\u002Fgithub.com\u002Flsh3163\u002FImagenet-Better)]\n\n- **[2] BitFit：基于 Transformer 的掩码语言模型的简单参数高效微调，** ACL 2022。\n\n  *Zaken, Elad Ben、Ravfogel, Shauli 和 Goldberg, Yoav。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.10199.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fbenzakenelad\u002FBitFit)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBitFit-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n\n- **[3] 基础模型的差分隐私偏置项仅微调，** Arxiv 2022。\n  \n  *Bu, Zhiqi、Wang, Yu-Xiang、Zha, Sheng 和 Karypis, George。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00036)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDP_BiTFiT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n  \n- **[4] AdapterBias：用于 NLP 任务中适配器的参数高效、与标记相关的表征迁移，** NAACL 2022。\n  \n  *Fu, Chin-Lun、Chen, Zih-Ching、Lee, Yun-Ru 和 Lee, Hung-yi。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.00305)][[代码](https:\u002F\u002Fgithub.com\u002FAllen0307\u002FAdapterBias)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapterBias-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLayerNorm_Tuning-orange)\n  \n- **[5] 参数高效的少样本微调的强大基线，** AAAI 2024。\n  \n  *Basu, Samyadeep、Massiceti, Daniela、Hu, Shell Xu 和 Feizi, Soheil。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01917)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLN_TUNE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLayerNorm_Tuning-orange)\n\n- **[6] DiffFit：通过简单的参数高效微调解锁大型扩散模型的可迁移性，** ICCV 2023。\n  \n  *Enze Xie、Lewei Yao、Han Shi、Zhili Liu、Daquan Zhou、Zhaoqiang Liu、Jiawei Li、Zhenguo Li。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06648)][[代码](https:\u002F\u002Fgithub.com\u002Fmkshing\u002FDiffFit-pytorch)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffFit-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGenerate_Task-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBias_Tuning-orange)\n\n- **[7] 基于梯度的参数选择以实现高效微调，** Arxiv 2023。\n  \n  *Zhi Zhang、Qizhe Zhang、Zijun Gao、Renrui Zhang、Ekaterina Shutova、Shiji Zhou、Shanghang Zhang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10136)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGPS-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImportance_Parameter_Tuning-orange)\n\n- **[8] 基于敏感性的视觉参数高效微调，** ICCV 2023。\n  \n  *Haoyu He、Jianfei Cai、Jing Zhang、Dacheng Tao、Bohan Zhuang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08566)][[代码](https:\u002F\u002Fgithub.com\u002Fziplab\u002FSPT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSPT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImportance_Parameter_Tuning-orange)\n\n- **[9] 基于梯度的参数选择以实现高效微调，** CVPR 2024。（与 [7] 为同一工作的会议版本）\n  \n  *Zhi Zhang、Qizhe Zhang、Zijun Gao、Renrui Zhang、Ekaterina Shutova、Shiji Zhou、Shanghang Zhang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.10136)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGPS-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImportance_Parameter_Tuning-orange)\n\n
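规范调优（如 BitFit、LN_TUNE）不新增任何模块，只放开模型中某一小类原生参数。以 BitFit 的思路为例，下面的示意仅训练全部 bias 项（极简示意，并非官方实现；把筛选条件换成 LayerNorm 参数即对应 LN_TUNE 的做法）：\n\n```python\nimport torch.nn as nn\n\n# 冻结全部权重，仅放开 bias 项参与训练\nmodel = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)\nfor name, p in model.named_parameters():\n    p.requires_grad = name.endswith(\"bias\")\n\ntrainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\ntotal = sum(p.numel() for p in model.parameters())\nprint(f\"可训练参数占比: {trainable \u002F total:.2%}\")  # bias 通常只占总参数的千分之几\n```\n\n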
### 重参数化调优\n- **[1] LoRA：大语言模型的低秩适应，** NeurIPS 2021。\n\n  *Hu, Edward J、Shen, Yelong、Wallis, Phillip、Allen-Zhu, Zeyuan、Li, Yuanzhi 等。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09685.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLoRA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLoRA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[2] 缩放与偏移你的特征：一种高效的模型微调新基线，** NeurIPS 2022。\n  \n  *Dongze Lian、Daquan Zhou、Jiashi Feng、Xinchao Wang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.08823)][[代码](https:\u002F\u002Fgithub.com\u002Fdongzelian\u002FSSF)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSSF-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMLP_Tuning-orange)\n\n- **[3] KronA：基于克罗内克适配器的参数高效微调，** Arxiv 2023。\n  \n  *Ali Edalati、Marzieh Tahaei、Ivan Kobyzev、Vahid Partovi Nia、James J. Clark、Mehdi Rezagholizadeh。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10650)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKronA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[4] FacT：视觉Transformer上的轻量级自适应因子微调，** AAAI 2023。\n  \n  *Jie, Shibo 和 Deng, Zhi-Hong。*\n\n  [[论文](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F25187)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFacT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensor_Decomposition-orange)\n\n- **[5] 聚合、分解与微调：一种简单而有效的视觉Transformer因子微调方法，** Arxiv 2023。\n  \n  *Chen, Dongping。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.06749)][[代码](https:\u002F\u002Fgithub.com\u002FDongping-Chen\u002FEFFT-EFfective-Factor-Tuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FEFFT-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensor_Decomposition-orange)\n\n- **[6] 参数高效少样本微调的强大基线，** AAAI 2024。\n  \n  *Basu, Samyadeep、Massiceti, Daniela、Hu, Shell Xu 和 Feizi, Soheil。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01917)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FATTNSCALE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[7] 视觉Transformer的参数高效模型适配，** AAAI 2023。\n  \n  *He, Xuehai、Li, Chunyuan、Zhang, Pengchuan、Yang, Jianwei 和 Wang, Xin Eric。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.16329)][[代码](https:\u002F\u002Fgithub.com\u002Feric-ai-lab\u002FPEViT)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FKAdaptation-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[8] DnA：通过低秩分解与对齐提升少样本迁移学习，** ECCV 2022。\n  \n  *Jiang, Ziyu、Chen, Tianlong、Chen, Xuxi、Cheng, Yu、Zhou, Luowei、Yuan, Lu 等。*\n\n  [[论文](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-20044-1_14)][[代码](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FDnA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDnA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[9] 通过结构化重参数化实现高效视觉适配，** Arxiv 2023。\n  \n  *Luo, Gen、Huang, Minglang、Zhou, Yiyi、Sun, Xiaoshuai、Jiang, Guannan、Wang, Zhiyu 和 Ji, Rongrong。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.08106)][[代码](https:\u002F\u002Fgithub.com\u002Fluogen1996\u002FRepAdapter)] 
![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRepAdapter-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Classification-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FAdapter_Reparameter-orange)\n\n- **[10] SAM-PARSER：通过参数空间重构高效微调SAM，** AAAI 2024。\n  \n  *Zelin Peng、Zhengqin Xu、Zhilin Zeng、Xiaokang Yang、Wei Shen。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.14604)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSAM_PARSER-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDense_Prediction-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[11] DiffuseKronA：个性化扩散模型的参数高效微调方法，** Arxiv 2023。\n  \n  *Shyam Marjit、Harshit Singh、Nityanand Mathur、Sayak Paul、Chia-Mu Yu、Pin-Yu Chen。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.17412)][[代码](https:\u002F\u002Fgithub.com\u002FIBM\u002FDiffuseKronA)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffuseKronA-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiffusion_Model-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight_Tuning-orange)\n\n- **[12] 扩展稀疏微调以降低内存占用，** NeurIPS 2024。\n\n  *Shufan Shen、Junshu Sun、Xiangyang Ji、Qingming Huang、Shuhui Wang。*\n  \n  [[论文](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F8c420176b45e923cf99dee1d7356a763-Paper-Conference.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fssfgunner\u002FSNELL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSNELL-blue)\n\n- **[13] PointLoRA：面向点云学习的基于标记选择的低秩适应，** CVPR 2025。\n\n  *Song Wang、Xiaolu Liu、Lingdong Kong、Jianyun Xu、Chunyong Hu、Gongfan Fang、Wentong Li、Jianke Zhu、Xinchao Wang。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16023)][[代码](https:\u002F\u002Fgithub.com\u002Fsongw-zju\u002FPointLoRA)]\n\n
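这一类方法的核心是把权重的增量约束成低秩或其他可合并的结构。以 LoRA 为例，下面是一个从零实现的极简示意（并非 microsoft\u002FLoRA 的官方实现，r、alpha 等取值仅为演示）：\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass LoRALinear(nn.Module):\n    \"\"\"y = W x + (alpha \u002F r) * B A x：冻结 W，仅训练低秩矩阵 A、B。\"\"\"\n    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):\n        super().__init__()\n        self.base = base\n        for p in self.base.parameters():\n            p.requires_grad = False\n        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)\n        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B 零初始化，初始增量为 0\n        self.scale = alpha \u002F r\n\n    def forward(self, x):\n        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)\n\nlayer = LoRALinear(nn.Linear(768, 768))\ny = layer(torch.randn(2, 196, 768))\n# 推理前可将 scale * B @ A 合并回 W，因此不增加任何额外推理开销，这正是“重参数化”的含义\n```\n\n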
### 统一调优\n- **[1] 朝向参数高效迁移学习的统一视角，** ICLR 2022。\n\n  *Junxian He、Chunting Zhou、Xuezhe Ma、Taylor Berg-Kirkpatrick、Graham Neubig。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.04366)][[代码](https:\u002F\u002Fgithub.com\u002Fjxhe\u002Funify-parameter-efficient-tuning)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[2] 朝向视觉领域参数高效迁移学习的统一视角，** Arxiv 2023。\n\n  *Yu, Bruce XB、Chang, Jianlong、Liu, Lingbo、Tian, Qi、Chen, Chang Wen。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00788)][[代码](https:\u002F\u002Fgithub.com\u002Fbruceyo\u002FV-PETL)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FV_PETL-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[3] 神经提示搜索，** Arxiv 2022。\n  \n  *Zhang, Yuanhan、Zhou, Kaiyang、Liu, Ziwei。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04673)][[代码](https:\u002F\u002Fgithub.com\u002FDavidzhangyuanhan\u002FNOAH)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNOAH-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n- **[4] 从统一视角重新思考高效调优方法，** Arxiv 2023。\n  \n  *Jiang, Zeyinzi、Mao, Chaojie、Huang, Ziyuan、Lv, Yiliang、Zhao, Deli、Zhou, Jingren。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.00690.pdf)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FU_Tuning-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FUnified_View-orange)\n\n- **[5] 具有通用参数高效调优的统一持续学习框架，** ICCV 2023。\n  \n  *Gao, Qiankun、Zhao, Chen、Sun, Yifan、Xi, Teng、Zhang, Gang、Ghanem, Bernard、Zhang, Jian。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10070)][[代码](https:\u002F\u002Fgithub.com\u002Fgqk\u002FLAE)] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLAE-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContinual_Learning-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n- **[6] GIST：通过知识交互改进参数高效微调，** Arxiv 2023。\n\n  *Jiacheng Ruan、Jingsheng Gao、Mingye Xie、Suncheng Xiang、Zefang Yu、Ting Liu、Yuzhuo Fu。*\n\n  [[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.07255.pdf)][代码] ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGIST-blue) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FImage_Recognition-green) ![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FFramework-orange)\n\n
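统一调优工作（如 NOAH、U-Tuning）的出发点，是把 Adapter、提示词、LoRA 等模块纳入同一设计空间统一考虑。下面的示意把前文三类模块组合进同一个 Transformer 块中（纯演示性质，并非 NOAH 等论文的实际搜索算法，各超参数均为假设）：\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass UnifiedPETLBlock(nn.Module):\n    \"\"\"在一个冻结的编码层上同时挂载提示词、Adapter 与 LoRA 风格增量。\"\"\"\n    def __init__(self, dim: int = 768, num_prompts: int = 5, bottleneck: int = 32, r: int = 4):\n        super().__init__()\n        self.layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)\n        for p in self.layer.parameters():\n            p.requires_grad = False\n        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)  # 提示词分支\n        self.adapter = nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))  # Adapter 分支\n        self.lora_A = nn.Parameter(torch.randn(r, dim) * 0.01)  # LoRA 分支\n        self.lora_B = nn.Parameter(torch.zeros(dim, r))\n\n    def forward(self, x):\n        n = self.prompts.size(1)\n        x = torch.cat([self.prompts.expand(x.size(0), -1, -1), x], dim=1)\n        h = self.layer(x)[:, n:]                      # 冻结主干 + 提示词\n        h = h + self.adapter(h)                       # 残差 Adapter\n        return h + h @ self.lora_A.T @ self.lora_B.T  # 低秩增量\n\nout = UnifiedPETLBlock()(torch.randn(2, 196, 768))\n```\n\n统一视角的意义在于：不同模块只是同一“冻结主干 + 轻量增量”模板下的不同实例，可按任务自动搜索或手工组合。\n\n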
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsynbol_Awesome-Parameter-Efficient-Transfer-Learning_readme_f1d38ecfbda0.png\" \u002F>\n\u003C\u002Fa>\n\n### :clap: 感谢以上贡献者们的杰出工作！\n\n## ⭐ \u003Cspan id=\"head1\"> *引用* \u003C\u002Fspan>\n\n如果您觉得我们的综述和仓库对您的研究有所帮助，请在下方引用：\n\n```bibtex\n\n@article{xin2024parameter,\n  title={预训练视觉模型的参数高效微调：综述},\n  author={Xin, Yi、Luo, Siqi、Zhou, Haodi、Du, Junlong、Liu, Xiaohong、Fan, Yue、Li, Qing、Du, Yuntao},\n  journal={arXiv 预印本 arXiv:2402.02242},\n  year={2024}\n}\n\n```","# Awesome-Parameter-Efficient-Transfer-Learning 快速上手指南\n\n本项目并非一个单一的 Python 库，而是一个**精选资源列表**，汇集了参数高效迁移学习（PEFT）领域的论文、代码实现和技术方法（如 Adapter、Prompt Tuning 等）。要使用其中的技术，通常需要结合具体的实现库（如 Hugging Face `peft`）或直接运行列表中特定论文的开源代码。\n\n以下是基于该领域主流实践的快速上手流程。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐), macOS, 或 Windows (WSL2)\n*   **Python 版本**: >= 3.8\n*   **硬件要求**: 建议使用 NVIDIA GPU (CUDA 支持) 以加速训练和推理；CPU 亦可运行但速度较慢。\n*   **前置依赖**:\n    *   `git`: 用于克隆仓库和代码\n    *   `pip` 或 `conda`: 包管理工具\n    *   `PyTorch`: 深度学习框架\n\n> **国内加速建议**:\n> 推荐使用清华源或阿里源安装 Python 依赖，以提升下载速度。\n> ```bash\n> pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n由于本项目是资源列表，你可以根据需求选择以下两种方式进行“安装”：\n\n### 方式一：克隆资源列表（查阅论文与代码链接）\n如果你需要浏览最新的论文列表和对应的 GitHub 仓库链接：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fsynbol\u002FAwesome-Parameter-Efficient-Transfer-Learning.git\ncd Awesome-Parameter-Efficient-Transfer-Learning\n```\n\n### 方式二：安装主流 PEFT 实现库（推荐用于实际开发）\n列表中大多数方法（如 LoRA, Adapter, Prefix Tuning）已集成在 Hugging Face 的 `peft` 库中。这是最快捷的开始方式：\n\n```bash\n# 使用 pip 安装\npip install peft accelerate transformers torch\n\n# 或者使用 conda\nconda install -c huggingface peft accelerate transformers pytorch torchvision torchaudio cudatoolkit=11.8\n```\n\n## 基本使用\n\n以下示例展示如何使用 `peft` 库（对应列表中 **Adapter Tuning** 或 **LoRA** 等方法）对预训练模型进行高效微调。\n\n### 1. 加载模型并应用 PEFT 配置\n以最常用的 **LoRA** (Low-Rank Adaptation) 为例，它属于列表中 \"Addition-based Tuning\" 的高效变体：\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nfrom peft import LoraConfig, get_peft_model, TaskType\n\n# 1. 加载预训练模型和分词器\nmodel_name = \"bigscience\u002Fbloomz-560m\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nbase_model = AutoModelForCausalLM.from_pretrained(model_name)\n\n# 2. 配置 PEFT 参数 (LoRA)\npeft_config = LoraConfig(\n    task_type=TaskType.CAUSAL_LM, \n    inference_mode=False, \n    r=8, # 低秩矩阵的秩\n    lora_alpha=32, \n    lora_dropout=0.1,\n    target_modules=[\"query_key_value\"] # 针对 Bloom 模型的特定层\n)\n\n# 3. 将 PEFT 配置应用到模型\nmodel = get_peft_model(base_model, peft_config)\nmodel.print_trainable_parameters()\n# 输出示例：trainable params: 0.19% || all params: 559M || trainable%: 0.19\n```\n\n### 2. 训练与推理\n配置完成后，像普通 PyTorch 模型一样进行训练。训练结束后，仅保存微小的适配器权重：\n\n```python\n# 训练循环 (伪代码)\n# for batch in dataloader:\n#     outputs = model(**batch)\n#     loss = outputs.loss\n#     loss.backward()\n#     optimizer.step()\n\n# 保存 PEFT 权重 (仅几 MB)\nmodel.save_pretrained(\".\u002Fmy_lora_adapter\")\n\n# 加载适配器进行推理\nfrom peft import PeftModel\n\ninference_model = PeftModel.from_pretrained(base_model, \".\u002Fmy_lora_adapter\")\ninputs = tokenizer(\"Hello, how are you?\", return_tensors=\"pt\")\noutputs = inference_model.generate(**inputs)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\n```\n\n### 3. 
探索更多方法\n访问克隆后的本地目录或在线仓库，查看 `README.md` 中的 **Papers** 章节，找到你感兴趣的特定方法（如 **Prompt Tuning**, **Side Tuning** 等），点击对应的代码链接获取该特定算法的独立实现脚本。","某初创医疗科技公司的算法团队，正试图将通用的大型语言模型微调为专业的“临床病历辅助生成助手”，但面临算力预算紧张和研发周期短的双重压力。\n\n### 没有 Awesome-Parameter-Efficient-Transfer-Learning 时\n- **算力成本高昂**：团队不得不采用全量参数微调，需要租用多张昂贵的 A100 显卡，导致单次实验成本超出预算 50%。\n- **技术选型迷茫**：面对 Adapter、LoRA、Prefix Tuning 等数十种新兴技术，工程师花费两周时间漫无目的地搜索论文和代码库，难以确定最适合医疗文本的方案。\n- **复现难度极大**：找到的开源代码风格各异、依赖冲突严重，且缺乏针对特定任务的配置参考，导致环境搭建和调试耗时极长。\n- **迭代速度缓慢**：由于训练资源受限且试错成本高，模型一天只能进行一轮验证，严重拖慢了产品上线进度。\n\n### 使用 Awesome-Parameter-Efficient-Transfer-Learning 后\n- **资源消耗骤降**：通过仓库推荐的 LoRA 和 Adapter 方案，团队仅用单张消费级显卡即可完成微调，显存占用减少 80%，直接节省了数万元云租赁费用。\n- **决策路径清晰**：利用仓库分类清晰的\"Addition-based Tuning\"目录和精选论文列表，团队在半天内就锁定了适合少样本医疗数据的最佳算法组合。\n- **落地效率倍增**：直接引用仓库中经过验证的高质量实现代码和基准配置，避免了重复造轮子，将原本需要两天的环境部署压缩至两小时。\n- **快速迭代优化**：低成本使得团队可以并行尝试多种参数策略，每天完成十轮以上实验，迅速提升了模型在专业术语理解上的准确率。\n\nAwesome-Parameter-Efficient-Transfer-Learning 通过整合前沿资源与最佳实践，让中小团队也能以极低的成本高效驾驭大模型微调技术。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsynbol_Awesome-Parameter-Efficient-Transfer-Learning_8248540d.png","synbol","SII-SynBol (辛毅)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fsynbol_63f953ac.png","SII is an institution dedicated to innovation in education and research in the field of AI.","Nanjing University","China",null,"https:\u002F\u002Fsynbol.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fsynbol",591,18,"2026-04-03T05:51:06",1,"","未说明",{"notes":88,"python":86,"dependencies":89},"该项目是一个资源列表（Awesome List），用于收集参数高效迁移学习（Parameter-Efficient Transfer Learning）相关的论文和资源，本身不是一个可执行的软件工具或代码库，因此没有具体的运行环境、依赖库或硬件需求。用户需根据列表中引用的具体论文或子项目去查询相应的环境要求。",[],[14,35,15],[92,93,94,95,96,97,98,99],"awesome-list","computer-vision","parameter-efficient-fine-tuning","transfer-learning","survey","deep-learning","pre-trained-models","transformer","2026-03-27T02:49:30.150509","2026-04-08T19:02:47.803509",[],[]]