[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-xmu-xiaoma666--External-Attention-pytorch":3,"tool-xmu-xiaoma666--External-Attention-pytorch":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",148568,2,"2026-04-09T23:34:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
### microsoft/markitdown ★ 93,400

MarkItDown is a lightweight Python utility from Microsoft's AutoGen team, built to convert files of many kinds into Markdown efficiently. It can parse PDF, Word, Excel, PowerPoint, images (with OCR), audio (with speech transcription), HTML, and even YouTube links, accurately extracting key structural information such as headings, lists, tables, and links.

As AI applications spread, large language models (LLMs) handle text well but cannot directly read complex binary office documents. MarkItDown fills exactly this gap: it turns unstructured or semi-structured files into Markdown that models understand "natively" and that is highly token-efficient, making it an ideal bridge between local files and an AI analysis pipeline. It also provides an MCP (Model Context Protocol) server that integrates seamlessly with LLM applications such as Claude Desktop.

The tool is especially useful to developers, data scientists, and AI researchers, above all those building retrieval-augmented generation (RAG) systems, running batch text analysis, or wanting an AI assistant to "read" local files directly. The output is also reasonably human-readable, but its core strength lies in serving machines…

---

# xmu-xiaoma666/External-Attention-pytorch

🍀 Pytorch implementation of various Attention Mechanisms, MLP, Re-parameter, Convolution, which is helpful to further understand papers. ⭐⭐⭐

External-Attention-pytorch is an open-source codebase built on PyTorch that collects and reimplements mainstream attention mechanisms (Attention), multi-layer perceptrons (MLP), re-parameterization (Re-parameter) techniques, and convolution modules. Its core goal is to help developers and researchers understand the key ideas of state-of-the-art papers at the code level.

When reading papers, many researchers hit the same pain point: the theory often looks simple, yet the official source code buries the proposed module deep inside a classification, detection, or segmentation framework, making the key code hard to isolate, study, and reuse. External-Attention-pytorch solves this by packaging each algorithmic module as an independent, semantically clear, plug-and-play "Lego brick". Instead of reinventing wheels or wading through engineering boilerplate, users can verify the effect of different mechanisms in their experiments with just a few lines of code.

The toolkit suits deep learning beginners, academic researchers, and algorithm engineers alike. For newcomers it is an excellent tutorial for dissecting the ideas behind papers and mastering their core implementations; for advanced researchers it offers a rich set of building blocks for quickly assembling and iterating on new models. The project covers classic attention mechanisms such as SE, CBAM, and SK, keeps adding recent architectures such as MobileViTv2, and can be installed directly via pip or used from source, greatly lowering the research barrier and improving experimental efficiency.
---

<img src="https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_2b0c5abc6cfa.gif" height="200" width="400"/>

Simplified Chinese | [English](./README_EN.md)

# FightingCV codebase, covering [***Attention***](#attention-series), [***Backbone***](#backbone-series), [***MLP***](#mlp-series), [***Re-parameter***](#re-parameter-series), [***Convolution***](#convolution-series)

![](https://img.shields.io/badge/fightingcv-v0.0.1-brightgreen)
![](https://img.shields.io/badge/python->=v3.0-blue)
![](https://img.shields.io/badge/pytorch->=v1.4-red)

## 🌟 Star History

[![Star History Chart](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_8918d6f0a640.png)](https://star-history.com/#xmu-xiaoma666/External-Attention-pytorch&Date)

## Usage

### Installation

Install directly via pip:

```shell
pip install fightingcv-attention
```

Or clone this repository:

```shell
git clone https://github.com/xmu-xiaoma666/External-Attention-pytorch.git
cd External-Attention-pytorch
```

### Demo

#### Using the pip package
```python
import torch

# pip-package import path
from fightingcv_attention.attention.MobileViTv2Attention import *

if __name__ == '__main__':
    input=torch.randn(50,49,512)    # (batch, tokens, d_model)
    sa = MobileViTv2Attention(d_model=512)
    output=sa(input)
    print(output.shape)
```

- Reference for the modules bundled in the pip package: [fightingcv-attention documentation](./README_pip.md)

#### Using the cloned repository
```python
import torch

# The only difference from the pip version: the package prefix is `model`
# instead of `fightingcv_attention`.
from model.attention.MobileViTv2Attention import *

if __name__ == '__main__':
    input=torch.randn(50,49,512)    # (batch, tokens, d_model)
    sa = MobileViTv2Attention(d_model=512)
    output=sa(input)
    print(output.shape)
```
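Because every module keeps a simple tensor-in, tensor-out contract, it can be dropped into an existing network like a Lego brick. The following minimal sketch is not part of the repository (`ConvBlockWithSE` is a hypothetical name for illustration); it inserts the SEAttention block from [section 4](#4-squeeze-and-excitation-attention-usage) after a convolution stage:

```python
import torch
from torch import nn

# Repo import path; pip users would import from
# fightingcv_attention.attention.SEAttention instead.
from model.attention.SEAttention import SEAttention

class ConvBlockWithSE(nn.Module):
    """Hypothetical conv block: a 3x3 conv stage whose output channels
    are re-weighted by the SEAttention module from section 4."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.se = SEAttention(channel=out_ch, reduction=8)

    def forward(self, x):
        return self.se(self.conv(x))    # SE keeps the (B, C, H, W) shape

if __name__ == '__main__':
    block = ConvBlockWithSE(3, 512)
    print(block(torch.randn(2, 3, 56, 56)).shape)    # torch.Size([2, 512, 56, 56])
```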
-------

# Contents

- [Attention Series](#attention-series)
    - [1. External Attention Usage](#1-external-attention-usage)
    - [2. Self Attention Usage](#2-self-attention-usage)
    - [3. Simplified Self Attention Usage](#3-simplified-self-attention-usage)
    - [4. Squeeze-and-Excitation Attention Usage](#4-squeeze-and-excitation-attention-usage)
    - [5. SK Attention Usage](#5-sk-attention-usage)
    - [6. CBAM Attention Usage](#6-cbam-attention-usage)
    - [7. BAM Attention Usage](#7-bam-attention-usage)
    - [8. ECA Attention Usage](#8-eca-attention-usage)
    - [9. DANet Attention Usage](#9-danet-attention-usage)
    - [10. Pyramid Split Attention (PSA) Usage](#10-Pyramid-Split-Attention-Usage)
    - [11. Efficient Multi-Head Self-Attention (EMSA) Usage](#11-Efficient-Multi-Head-Self-Attention-Usage)
    - [12. Shuffle Attention Usage](#12-Shuffle-Attention-Usage)
    - [13. MUSE Attention Usage](#13-MUSE-Attention-Usage)
    - [14. SGE Attention Usage](#14-SGE-Attention-Usage)
    - [15. A2 Attention Usage](#15-A2-Attention-Usage)
    - [16. AFT Attention Usage](#16-AFT-Attention-Usage)
    - [17. Outlook Attention Usage](#17-Outlook-Attention-Usage)
    - [18. ViP Attention Usage](#18-ViP-Attention-Usage)
    - [19. CoAtNet Attention Usage](#19-CoAtNet-Attention-Usage)
    - [20. HaloNet Attention Usage](#20-HaloNet-Attention-Usage)
    - [21. Polarized Self-Attention Usage](#21-Polarized-Self-Attention-Usage)
    - [22. CoTAttention Usage](#22-CoTAttention-Usage)
    - [23. Residual Attention Usage](#23-Residual-Attention-Usage)
    - [24. S2 Attention Usage](#24-S2-Attention-Usage)
    - [25. GFNet Attention Usage](#25-GFNet-Attention-Usage)
    - [26. Triplet Attention Usage](#26-TripletAttention-Usage)
    - [27. Coordinate Attention Usage](#27-Coordinate-Attention-Usage)
    - [28. MobileViT Attention Usage](#28-MobileViT-Attention-Usage)
    - [29. ParNet Attention Usage](#29-ParNet-Attention-Usage)
    - [30. UFO Attention Usage](#30-UFO-Attention-Usage)
    - [31. ACmix Attention Usage](#31-Acmix-Attention-Usage)
    - [32. MobileViTv2 Attention Usage](#32-MobileViTv2-Attention-Usage)
    - [33. DAT Attention Usage](#33-DAT-Attention-Usage)
    - [34. CrossFormer Attention Usage](#34-CrossFormer-Attention-Usage)
    - [35. MOATransformer Attention Usage](#35-MOATransformer-Attention-Usage)
    - [36. CrissCrossAttention Attention Usage](#36-CrissCrossAttention-Attention-Usage)
    - [37. Axial_attention Attention Usage](#37-Axial_attention-Attention-Usage)
- [Backbone Series](#backbone-series)
    - [1. ResNet Usage](#1-ResNet-Usage)
    - [2. ResNeXt Usage](#2-ResNeXt-Usage)
    - [3. MobileViT Usage](#3-MobileViT-Usage)
    - [4. ConvMixer Usage](#4-ConvMixer-Usage)
    - [5. ShuffleTransformer Usage](#5-ShuffleTransformer-Usage)
    - [6. ConTNet Usage](#6-ConTNet-Usage)
    - [7. HATNet Usage](#7-HATNet-Usage)
    - [8. CoaT Usage](#8-CoaT-Usage)
    - [9. PVT Usage](#9-PVT-Usage)
    - [10. CPVT Usage](#10-CPVT-Usage)
    - [11. PIT Usage](#11-PIT-Usage)
    - [12. CrossViT Usage](#12-CrossViT-Usage)
    - [13. TnT Usage](#13-TnT-Usage)
    - [14. DViT Usage](#14-DViT-Usage)
    - [15. CeiT Usage](#15-CeiT-Usage)
    - [16. ConViT Usage](#16-ConViT-Usage)
    - [17. CaiT Usage](#17-CaiT-Usage)
    - [18. PatchConvnet Usage](#18-PatchConvnet-Usage)
    - [19. DeiT Usage](#19-DeiT-Usage)
    - [20. LeViT Usage](#20-LeViT-Usage)
    - [21. VOLO Usage](#21-VOLO-Usage)
    - [22. Container Usage](#22-Container-Usage)
    - [23. CMT Usage](#23-CMT-Usage)
    - [24. EfficientFormer Usage](#24-EfficientFormer-Usage)
    - [25. ConvNeXtV2 Usage](#25-ConvNeXtV2-Usage)
- [MLP Series](#mlp-series)
    - [1. RepMLP Usage](#1-RepMLP-Usage)
    - [2. MLP-Mixer Usage](#2-MLP-Mixer-Usage)
    - [3. ResMLP Usage](#3-ResMLP-Usage)
    - [4. gMLP Usage](#4-gMLP-Usage)
    - [5. sMLP Usage](#5-sMLP-Usage)
    - [6. vip-mlp Usage](#6-vip-mlp-Usage)
- [Re-Parameter (ReP) Series](#Re-Parameter-series)
    - [1. RepVGG Usage](#1-RepVGG-Usage)
    - [2. ACNet Usage](#2-ACNet-Usage)
    - [3. Diverse Branch Block (DBB) Usage](#3-Diverse-Branch-Block-Usage)
- [Convolution Series](#Convolution-series)
    - [1. Depthwise Separable Convolution Usage](#1-Depthwise-Separable-Convolution-Usage)
    - [2. MBConv Usage](#2-MBConv-Usage)
    - [3. Involution Usage](#3-Involution-Usage)
    - [4. DynamicConv Usage](#4-DynamicConv-Usage)
    - [5. CondConv Usage](#5-CondConv-Usage)

***

# Attention Series

- Pytorch implementation of ["Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks---arXiv 2021.05.05"](https://arxiv.org/abs/2105.02358)
- Pytorch implementation of ["Attention Is All You Need---NIPS2017"](https://arxiv.org/pdf/1706.03762.pdf)
- Pytorch implementation of ["Squeeze-and-Excitation Networks---CVPR2018"](https://arxiv.org/abs/1709.01507)
- Pytorch implementation of ["Selective Kernel Networks---CVPR2019"](https://arxiv.org/pdf/1903.06586.pdf)
- Pytorch implementation of ["CBAM: Convolutional Block Attention Module---ECCV2018"](https://openaccess.thecvf.com/content_ECCV_2018/papers/Sanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.pdf)
- Pytorch implementation of ["BAM: Bottleneck Attention Module---BMVC2018"](https://arxiv.org/pdf/1807.06514.pdf)
- Pytorch implementation of ["ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks---CVPR2020"](https://arxiv.org/pdf/1910.03151.pdf)
- Pytorch implementation of ["Dual Attention Network for Scene Segmentation---CVPR2019"](https://arxiv.org/pdf/1809.02983.pdf)
- Pytorch implementation of ["EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network---arXiv 2021.05.30"](https://arxiv.org/pdf/2105.14447.pdf)
- Pytorch implementation of ["ResT: An Efficient Transformer for Visual Recognition---arXiv 2021.05.28"](https://arxiv.org/abs/2105.13677)
- Pytorch implementation of ["SA-NET: SHUFFLE ATTENTION FOR DEEP CONVOLUTIONAL NEURAL NETWORKS---ICASSP 2021"](https://arxiv.org/pdf/2102.00240.pdf)
- Pytorch implementation of ["MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning---arXiv 2019.11.17"](https://arxiv.org/abs/1911.09483)
- Pytorch implementation of ["Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks---arXiv 2019.05.23"](https://arxiv.org/pdf/1905.09646.pdf)
- Pytorch implementation of ["A2-Nets: Double Attention Networks---NIPS2018"](https://arxiv.org/pdf/1810.11579.pdf)
- Pytorch implementation of ["An Attention Free Transformer---ICLR2021 (Apple New Work)"](https://arxiv.org/pdf/2105.14103v1.pdf)
- Pytorch implementation of ["VOLO: Vision Outlooker for Visual Recognition---arXiv 2021.06.24"](https://arxiv.org/abs/2106.13112) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/385561050)
- Pytorch implementation of ["Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition---arXiv 2021.06.23"](https://arxiv.org/abs/2106.12368) [【Paper walkthrough】](https://mp.weixin.qq.com/s/5gonUQgBho_m2O54jyXF_Q)
- Pytorch implementation of ["CoAtNet: Marrying Convolution and Attention for All Data Sizes---arXiv 2021.06.09"](https://arxiv.org/abs/2106.04803) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/385578588)
- Pytorch implementation of ["Scaling Local Self-Attention for Parameter Efficient Visual Backbones---CVPR2021 Oral"](https://arxiv.org/pdf/2103.12731.pdf) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/388598744)
- Pytorch implementation of ["Polarized Self-Attention: Towards High-quality Pixel-wise Regression---arXiv 2021.07.02"](https://arxiv.org/abs/2107.00782) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/389770482)
- Pytorch implementation of ["Contextual Transformer Networks for Visual Recognition---arXiv 2021.07.26"](https://arxiv.org/abs/2107.12292) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/394795481)
- Pytorch implementation of ["Residual Attention: A Simple but Effective Method for Multi-Label Recognition---ICCV2021"](https://arxiv.org/abs/2108.02456)
- Pytorch implementation of ["S²-MLPv2: Improved Spatial-Shift MLP Architecture for Vision---arXiv 2021.08.02"](https://arxiv.org/abs/2108.01072) [【Paper walkthrough】](https://zhuanlan.zhihu.com/p/397003638)
- Pytorch implementation of ["Global Filter Networks for Image Classification---arXiv 2021.07.01"](https://arxiv.org/abs/2107.00645)
- Pytorch implementation of ["Rotate to Attend: Convolutional Triplet Attention Module---WACV 2021"](https://arxiv.org/abs/2010.03045)
- Pytorch implementation of ["Coordinate Attention for Efficient Mobile Network Design---CVPR 2021"](https://arxiv.org/abs/2103.02907)
- Pytorch implementation of ["MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05"](https://arxiv.org/abs/2110.02178)
- Pytorch implementation of ["Non-deep Networks---ArXiv 2021.10.20"](https://arxiv.org/abs/2110.07641)
- Pytorch implementation of ["UFO-ViT: High Performance Linear Vision Transformer without Softmax---ArXiv 2021.09.29"](https://arxiv.org/abs/2109.14382)
- Pytorch implementation of ["Separable Self-attention for Mobile Vision Transformers---ArXiv 2022.06.06"](https://arxiv.org/abs/2206.02680)
- Pytorch implementation of ["On the Integration of Self-Attention and Convolution---ArXiv 2022.03.14"](https://arxiv.org/pdf/2111.14556.pdf)
- Pytorch implementation of ["CROSSFORMER: A VERSATILE VISION TRANSFORMER HINGING ON CROSS-SCALE ATTENTION---ICLR 2022"](https://arxiv.org/pdf/2108.00154.pdf)
- Pytorch implementation of ["Aggregating Global Features into Local Vision Transformer"](https://arxiv.org/abs/2201.12903)
- Pytorch implementation of ["CCNet: Criss-Cross Attention for Semantic Segmentation"](https://arxiv.org/abs/1811.11721)
- Pytorch implementation of ["Axial Attention in Multidimensional Transformers"](https://arxiv.org/abs/1912.12180)

***

### 1. External Attention Usage
#### 1.1. Paper
["Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks"](https://arxiv.org/abs/2105.02358)

#### 1.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_ab18545bd81d.png)

#### 1.3. Usage Code
```python
from model.attention.ExternalAttention import ExternalAttention
import torch

input=torch.randn(50,49,512)    # (batch, tokens, d_model)
ea = ExternalAttention(d_model=512,S=8)
output=ea(input)
print(output.shape)
```

***

### 2. Self Attention Usage
#### 2.1. Paper
["Attention Is All You Need"](https://arxiv.org/pdf/1706.03762.pdf)

#### 2.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_44262dd769da.png)

#### 2.3. Usage Code
```python
from model.attention.SelfAttention import ScaledDotProductAttention
import torch

input=torch.randn(50,49,512)
sa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)
output=sa(input,input,input)    # query, key, value
print(output.shape)
```

***

### 3. Simplified Self Attention Usage
#### 3.1. Paper
None

#### 3.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_d0d96ce0a6c7.png)

#### 3.3. Usage Code
```python
from model.attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention
import torch

input=torch.randn(50,49,512)
ssa = SimplifiedScaledDotProductAttention(d_model=512, h=8)
output=ssa(input,input,input)
print(output.shape)
```

***

### 4. Squeeze-and-Excitation Attention Usage
#### 4.1. Paper
["Squeeze-and-Excitation Networks"](https://arxiv.org/abs/1709.01507)

#### 4.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_88e209a50982.png)

#### 4.3. Usage Code
```python
from model.attention.SEAttention import SEAttention
import torch

input=torch.randn(50,512,7,7)    # (batch, channels, H, W)
se = SEAttention(channel=512,reduction=8)
output=se(input)
print(output.shape)
```

***

### 5. SK Attention Usage
#### 5.1. Paper
["Selective Kernel Networks"](https://arxiv.org/pdf/1903.06586.pdf)

#### 5.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_ceb5b480d397.png)

#### 5.3. Usage Code
```python
from model.attention.SKAttention import SKAttention
import torch

input=torch.randn(50,512,7,7)
sk = SKAttention(channel=512,reduction=8)
output=sk(input)
print(output.shape)
```
***
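Both channel-attention blocks above (SE in section 4, SK in section 5) map a `(B, C, H, W)` tensor to a tensor of the same shape, so one can be swapped for the other without touching the rest of the network. A quick sketch of such an A/B comparison, assuming only the constructors shown above:

```python
import torch
from model.attention.SEAttention import SEAttention
from model.attention.SKAttention import SKAttention

x = torch.randn(50, 512, 7, 7)

# Both modules map (B, C, H, W) -> (B, C, H, W), so swapping one for the
# other requires no other change to the surrounding network.
for block in (SEAttention(channel=512, reduction=8),
              SKAttention(channel=512, reduction=8)):
    print(type(block).__name__, block(x).shape)
```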
### 6. CBAM Attention Usage
#### 6.1. Paper
["CBAM: Convolutional Block Attention Module"](https://openaccess.thecvf.com/content_ECCV_2018/papers/Sanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.pdf)

#### 6.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_8d5dded2ddca.png)

![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_330d7cb36174.png)

#### 6.3. Usage Code
```python
from model.attention.CBAM import CBAMBlock
import torch

input=torch.randn(50,512,7,7)
kernel_size=input.shape[2]    # spatial-attention kernel sized to the 7x7 feature map
cbam = CBAMBlock(channel=512,reduction=16,kernel_size=kernel_size)
output=cbam(input)
print(output.shape)
```

***

### 7. BAM Attention Usage
#### 7.1. Paper
["BAM: Bottleneck Attention Module"](https://arxiv.org/pdf/1807.06514.pdf)

#### 7.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_a7fa1aac9d36.png)

#### 7.3. Usage Code
```python
from model.attention.BAM import BAMBlock
import torch

input=torch.randn(50,512,7,7)
bam = BAMBlock(channel=512,reduction=16,dia_val=2)    # dia_val: dilation used in the spatial branch
output=bam(input)
print(output.shape)
```

***

### 8. ECA Attention Usage
#### 8.1. Paper
["ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks"](https://arxiv.org/pdf/1910.03151.pdf)

#### 8.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_27640d0e5c4d.png)

#### 8.3. Usage Code
```python
from model.attention.ECAAttention import ECAAttention
import torch

input=torch.randn(50,512,7,7)
eca = ECAAttention(kernel_size=3)
output=eca(input)
print(output.shape)
```

***

### 9. DANet Attention Usage
#### 9.1. Paper
["Dual Attention Network for Scene Segmentation"](https://arxiv.org/pdf/1809.02983.pdf)

#### 9.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_c273ac0ded35.png)

#### 9.3. Usage Code
```python
from model.attention.DANet import DAModule
import torch

input=torch.randn(50,512,7,7)
danet=DAModule(d_model=512,kernel_size=3,H=7,W=7)
print(danet(input).shape)
```

***

### 10. Pyramid Split Attention Usage
#### 10.1. Paper
["EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network"](https://arxiv.org/pdf/2105.14447.pdf)

#### 10.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_86fae2eb3db5.png)

#### 10.3. Usage Code
```python
from model.attention.PSA import PSA
import torch

input=torch.randn(50,512,7,7)
psa = PSA(channel=512,reduction=8)
output=psa(input)
print(output.shape)
```

***

### 11. Efficient Multi-Head Self-Attention Usage
#### 11.1. Paper
["ResT: An Efficient Transformer for Visual Recognition"](https://arxiv.org/abs/2105.13677)

#### 11.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_884191f141c7.png)

#### 11.3. Usage Code
```python
from model.attention.EMSA import EMSA
import torch

input=torch.randn(50,64,512)    # 64 tokens = an 8x8 map, matching H=8, W=8 below
emsa = EMSA(d_model=512, d_k=512, d_v=512, h=8,H=8,W=8,ratio=2,apply_transform=True)
output=emsa(input,input,input)
print(output.shape)
```

***
### 12. Shuffle Attention Usage
#### 12.1. Paper
["SA-NET: SHUFFLE ATTENTION FOR DEEP CONVOLUTIONAL NEURAL NETWORKS"](https://arxiv.org/pdf/2102.00240.pdf)

#### 12.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_75161322f3b9.jpg)

#### 12.3. Usage Code
```python
from model.attention.ShuffleAttention import ShuffleAttention
import torch

input=torch.randn(50,512,7,7)
sa = ShuffleAttention(channel=512,G=8)    # G: number of groups
output=sa(input)
print(output.shape)
```

***

### 13. MUSE Attention Usage
#### 13.1. Paper
["MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning"](https://arxiv.org/abs/1911.09483)

#### 13.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_e54a72841fa9.png)

#### 13.3. Usage Code
```python
from model.attention.MUSEAttention import MUSEAttention
import torch

input=torch.randn(50,49,512)
sa = MUSEAttention(d_model=512, d_k=512, d_v=512, h=8)
output=sa(input,input,input)
print(output.shape)
```

***

### 14. SGE Attention Usage
#### 14.1. Paper
["Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks"](https://arxiv.org/pdf/1905.09646.pdf)

#### 14.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_35313e77ea8b.png)

#### 14.3. Usage Code
```python
from model.attention.SGE import SpatialGroupEnhance
import torch

input=torch.randn(50,512,7,7)
sge = SpatialGroupEnhance(groups=8)
output=sge(input)
print(output.shape)
```

***

### 15. A2 Attention Usage
#### 15.1. Paper
["A2-Nets: Double Attention Networks"](https://arxiv.org/pdf/1810.11579.pdf)

#### 15.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_ee4fac013777.png)

#### 15.3. Usage Code
```python
from model.attention.A2Atttention import DoubleAttention
import torch

input=torch.randn(50,512,7,7)
a2 = DoubleAttention(512,128,128,True)
output=a2(input)
print(output.shape)
```

***

### 16. AFT Attention Usage
#### 16.1. Paper
["An Attention Free Transformer"](https://arxiv.org/pdf/2105.14103v1.pdf)

#### 16.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_6da7a23607ec.jpg)

#### 16.3. Usage Code
```python
from model.attention.AFT import AFT_FULL
import torch

input=torch.randn(50,49,512)
aft_full = AFT_FULL(d_model=512, n=49)    # n: sequence length
output=aft_full(input)
print(output.shape)
```

***
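The attention blocks in this series alternate between two calling conventions: feature-map style `(B, C, H, W)` and token-sequence style `(B, N, d_model)`. A sequence-style block can still be applied to a CNN feature map by flattening the spatial grid into tokens and reshaping back afterwards. A minimal sketch using ExternalAttention from section 1 (the reshaping is plain PyTorch and not part of the repository):

```python
import torch
from model.attention.ExternalAttention import ExternalAttention

feat = torch.randn(50, 512, 7, 7)                 # CNN feature map
tokens = feat.flatten(2).transpose(1, 2)          # -> (50, 49, 512)

ea = ExternalAttention(d_model=512, S=8)          # signature from section 1
out = ea(tokens)                                  # -> (50, 49, 512)

feat_out = out.transpose(1, 2).reshape(50, 512, 7, 7)    # back to a feature map
print(feat_out.shape)
```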
### 17. Outlook Attention Usage
#### 17.1. Paper
["VOLO: Vision Outlooker for Visual Recognition"](https://arxiv.org/abs/2106.13112)

#### 17.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_51af1ff37183.png)

#### 17.3. Usage Code
```python
from model.attention.OutlookAttention import OutlookAttention
import torch

input=torch.randn(50,28,28,512)    # channels-last: (batch, H, W, dim)
outlook = OutlookAttention(dim=512)
output=outlook(input)
print(output.shape)
```

***

### 18. ViP Attention Usage
#### 18.1. Paper
["Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition"](https://arxiv.org/abs/2106.12368)

#### 18.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_eba677a6c7b5.png)

#### 18.3. Usage Code
```python
from model.attention.ViP import WeightedPermuteMLP
import torch

input=torch.randn(64,8,8,512)    # channels-last
seg_dim=8
vip=WeightedPermuteMLP(512,seg_dim)
out=vip(input)
print(out.shape)
```

***

### 19. CoAtNet Attention Usage
#### 19.1. Paper
["CoAtNet: Marrying Convolution and Attention for All Data Sizes"](https://arxiv.org/abs/2106.04803)

#### 19.2. Overview
None

#### 19.3. Usage Code
```python
from model.attention.CoAtNet import CoAtNet
import torch

input=torch.randn(1,3,224,224)
coatnet=CoAtNet(in_ch=3,image_size=224)
out=coatnet(input)
print(out.shape)
```

***

### 20. HaloNet Attention Usage
#### 20.1. Paper
["Scaling Local Self-Attention for Parameter Efficient Visual Backbones"](https://arxiv.org/pdf/2103.12731.pdf)

#### 20.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_d97769fd6847.png)

#### 20.3. Usage Code
```python
from model.attention.HaloAttention import HaloAttention
import torch

input=torch.randn(1,512,8,8)
halo = HaloAttention(dim=512,
    block_size=2,
    halo_size=1,)
output=halo(input)
print(output.shape)
```

***

### 21. Polarized Self-Attention Usage
#### 21.1. Paper
["Polarized Self-Attention: Towards High-quality Pixel-wise Regression"](https://arxiv.org/abs/2107.00782)

#### 21.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_c78992e6d3d5.png)

#### 21.3. Usage Code
```python
from model.attention.PolarizedSelfAttention import ParallelPolarizedSelfAttention,SequentialPolarizedSelfAttention
import torch

input=torch.randn(1,512,7,7)
psa = SequentialPolarizedSelfAttention(channel=512)    # or ParallelPolarizedSelfAttention
output=psa(input)
print(output.shape)
```

***

### 22. CoTAttention Usage
#### 22.1. Paper
["Contextual Transformer Networks for Visual Recognition---arXiv 2021.07.26"](https://arxiv.org/abs/2107.12292)

#### 22.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_b9f9b989b30b.png)

#### 22.3. Usage Code
```python
from model.attention.CoTAttention import CoTAttention
import torch

input=torch.randn(50,512,7,7)
cot = CoTAttention(dim=512,kernel_size=3)
output=cot(input)
print(output.shape)
```

***
### 23. Residual Attention Usage
#### 23.1. Paper
["Residual Attention: A Simple but Effective Method for Multi-Label Recognition---ICCV2021"](https://arxiv.org/abs/2108.02456)

#### 23.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_4390afb38c44.png)

#### 23.3. Usage Code
```python
from model.attention.ResidualAttention import ResidualAttention
import torch

input=torch.randn(50,512,7,7)
resatt = ResidualAttention(channel=512,num_class=1000,la=0.2)
output=resatt(input)
print(output.shape)
```

***

### 24. S2 Attention Usage
#### 24.1. Paper
["S²-MLPv2: Improved Spatial-Shift MLP Architecture for Vision---arXiv 2021.08.02"](https://arxiv.org/abs/2108.01072)

#### 24.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_0c47cd693dcf.png)

#### 24.3. Usage Code
```python
from model.attention.S2Attention import S2Attention
import torch

input=torch.randn(50,512,7,7)
s2att = S2Attention(channels=512)
output=s2att(input)
print(output.shape)
```

***

### 25. GFNet Attention Usage
#### 25.1. Paper
["Global Filter Networks for Image Classification---arXiv 2021.07.01"](https://arxiv.org/abs/2107.00645)

#### 25.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_e5055ead1bc9.jpg)

#### 25.3. Usage Code - Implemented by [Wenliang Zhao (Author)](https://scholar.google.com/citations?user=lyPWvuEAAAAJ&hl=en)
```python
from model.attention.gfnet import GFNet
import torch

x = torch.randn(1, 3, 224, 224)
gfnet = GFNet(embed_dim=384, img_size=224, patch_size=16, num_classes=1000)
out = gfnet(x)
print(out.shape)
```

***

### 26. TripletAttention Usage
#### 26.1. Paper
["Rotate to Attend: Convolutional Triplet Attention Module---WACV 2021"](https://arxiv.org/abs/2010.03045)

#### 26.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_b70ec45201f6.png)

#### 26.3. Usage Code - Implemented by [digantamisra98](https://github.com/digantamisra98)
```python
from model.attention.TripletAttention import TripletAttention
import torch

input=torch.randn(50,512,7,7)
triplet = TripletAttention()    # parameter-free constructor
output=triplet(input)
print(output.shape)
```

***
### 27. Coordinate Attention Usage
#### 27.1. Paper
["Coordinate Attention for Efficient Mobile Network Design---CVPR 2021"](https://arxiv.org/abs/2103.02907)

#### 27.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_630691aa3c11.png)

#### 27.3. Usage Code - Implemented by [Andrew-Qibin](https://github.com/Andrew-Qibin)
```python
from model.attention.CoordAttention import CoordAtt
import torch

inp=torch.rand([2, 96, 56, 56])
inp_dim, oup_dim = 96, 96
reduction=32

coord_attention = CoordAtt(inp_dim, oup_dim, reduction=reduction)
output=coord_attention(inp)
print(output.shape)
```

***

### 28. MobileViT Attention Usage
#### 28.1. Paper
["MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05"](https://arxiv.org/abs/2110.02178)

#### 28.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_7eb9538809da.png)

#### 28.3. Usage Code
```python
from model.attention.MobileViTAttention import MobileViTAttention
import torch

if __name__ == '__main__':
    m=MobileViTAttention()
    input=torch.randn(1,3,49,49)
    output=m(input)
    print(output.shape)    # output: (1,3,49,49)
```

***

### 29. ParNet Attention Usage
#### 29.1. Paper
["Non-deep Networks---ArXiv 2021.10.20"](https://arxiv.org/abs/2110.07641)

#### 29.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_361866667326.png)

#### 29.3. Usage Code
```python
from model.attention.ParNetAttention import *
import torch

if __name__ == '__main__':
    input=torch.randn(50,512,7,7)
    pna = ParNetAttention(channel=512)
    output=pna(input)
    print(output.shape)    # (50,512,7,7)
```

***

### 30. UFO Attention Usage
#### 30.1. Paper
["UFO-ViT: High Performance Linear Vision Transformer without Softmax---ArXiv 2021.09.29"](https://arxiv.org/abs/2109.14382)

#### 30.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_51620dd3b70e.png)

#### 30.3. Usage Code
```python
from model.attention.UFOAttention import *
import torch

if __name__ == '__main__':
    input=torch.randn(50,49,512)
    ufo = UFOAttention(d_model=512, d_k=512, d_v=512, h=8)
    output=ufo(input,input,input)
    print(output.shape)    # [50, 49, 512]
```

***

### 31. ACmix Attention Usage
#### 31.1. Paper
["On the Integration of Self-Attention and Convolution"](https://arxiv.org/pdf/2111.14556.pdf)

#### 31.2. Usage Code
```python
from model.attention.ACmix import ACmix
import torch

if __name__ == '__main__':
    input=torch.randn(50,256,7,7)
    acmix = ACmix(in_planes=256, out_planes=256)
    output=acmix(input)
    print(output.shape)
```

***
### 32. MobileViTv2 Attention Usage
#### 32.1. Paper
["Separable Self-attention for Mobile Vision Transformers---ArXiv 2022.06.06"](https://arxiv.org/abs/2206.02680)

#### 32.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_f5b3b9eec5e9.png)

#### 32.3. Usage Code
```python
from model.attention.MobileViTv2Attention import MobileViTv2Attention
import torch

if __name__ == '__main__':
    input=torch.randn(50,49,512)
    sa = MobileViTv2Attention(d_model=512)
    output=sa(input)
    print(output.shape)
```

***

### 33. DAT Attention Usage
#### 33.1. Paper
["Vision Transformer with Deformable Attention---CVPR2022"](https://arxiv.org/abs/2201.00520)

#### 33.2. Usage Code
```python
from model.attention.DAT import DAT
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = DAT(
        img_size=224,
        patch_size=4,
        num_classes=1000,
        expansion=4,
        dim_stem=96,
        dims=[96, 192, 384, 768],
        depths=[2, 2, 6, 2],
        stage_spec=[['L', 'S'], ['L', 'S'], ['L', 'D', 'L', 'D', 'L', 'D'], ['L', 'D']],
        heads=[3, 6, 12, 24],
        window_sizes=[7, 7, 7, 7],
        groups=[-1, -1, 3, 6],
        use_pes=[False, False, True, True],
        dwc_pes=[False, False, False, False],
        strides=[-1, -1, 1, 1],
        sr_ratios=[-1, -1, -1, -1],
        offset_range_factor=[-1, -1, 2, 2],
        no_offs=[False, False, False, False],
        fixed_pes=[False, False, False, False],
        use_dwc_mlps=[False, False, False, False],
        use_conv_patches=False,
        drop_rate=0.0,
        attn_drop_rate=0.0,
        drop_path_rate=0.2,
    )
    output=model(input)
    print(output[0].shape)    # DAT returns a tuple; [0] holds the logits
```

***

### 34. CrossFormer Attention Usage
#### 34.1. Paper
["CROSSFORMER: A VERSATILE VISION TRANSFORMER HINGING ON CROSS-SCALE ATTENTION---ICLR 2022"](https://arxiv.org/pdf/2108.00154.pdf)

#### 34.2. Usage Code
```python
from model.attention.Crossformer import CrossFormer
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = CrossFormer(img_size=224,
        patch_size=[4, 8, 16, 32],
        in_chans=3,
        num_classes=1000,
        embed_dim=48,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        group_size=[7, 7, 7, 7],
        mlp_ratio=4.,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.0,
        drop_path_rate=0.1,
        ape=False,
        patch_norm=True,
        use_checkpoint=False,
        merge_size=[[2, 4], [2, 4], [2, 4]]
    )
    output=model(input)
    print(output.shape)
```

***

### 35. MOATransformer Attention Usage
#### 35.1. Paper
["Aggregating Global Features into Local Vision Transformer"](https://arxiv.org/abs/2201.12903)

#### 35.2. Usage Code
```python
from model.attention.MOATransformer import MOATransformer
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = MOATransformer(
        img_size=224,
        patch_size=4,
        in_chans=3,
        num_classes=1000,
        embed_dim=96,
        depths=[2, 2, 6],
        num_heads=[3, 6, 12],
        window_size=14,
        mlp_ratio=4.,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.0,
        drop_path_rate=0.1,
        ape=False,
        patch_norm=True,
        use_checkpoint=False
    )
    output=model(input)
    print(output.shape)
```

***
### 36. CrissCrossAttention Attention Usage
#### 36.1. Paper
["CCNet: Criss-Cross Attention for Semantic Segmentation"](https://arxiv.org/abs/1811.11721)

#### 36.2. Usage Code
```python
from model.attention.CrissCrossAttention import CrissCrossAttention
import torch

if __name__ == '__main__':
    input=torch.randn(3, 64, 7, 7)
    model = CrissCrossAttention(64)
    outputs = model(input)
    print(outputs.shape)
```

***

### 37. Axial_attention Attention Usage
#### 37.1. Paper
["Axial Attention in Multidimensional Transformers"](https://arxiv.org/abs/1912.12180)

#### 37.2. Usage Code
```python
from model.attention.Axial_attention import AxialImageTransformer
import torch

if __name__ == '__main__':
    input=torch.randn(3, 128, 7, 7)
    model = AxialImageTransformer(
        dim = 128,
        depth = 12,
        reversible = True
    )
    outputs = model(input)
    print(outputs.shape)
```

***

# Backbone Series

- Pytorch implementation of ["Deep Residual Learning for Image Recognition---CVPR2016 Best Paper"](https://arxiv.org/pdf/1512.03385.pdf)
- Pytorch implementation of ["Aggregated Residual Transformations for Deep Neural Networks---CVPR2017"](https://arxiv.org/abs/1611.05431v2)
- Pytorch implementation of ["MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05"](https://arxiv.org/abs/2110.02178)
- Pytorch implementation of ["Patches Are All You Need?---ICLR2022 (Under Review)"](https://openreview.net/forum?id=TVHS5Y4dNvM)
- Pytorch implementation of ["Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer---ArXiv 2021.06.07"](https://arxiv.org/abs/2106.03650)
- Pytorch implementation of ["ConTNet: Why not use convolution and transformer at the same time?---ArXiv 2021.04.27"](https://arxiv.org/abs/2104.13497)
- Pytorch implementation of ["Vision Transformers with Hierarchical Attention---ArXiv 2022.06.15"](https://arxiv.org/abs/2106.03180)
- Pytorch implementation of ["Co-Scale Conv-Attentional Image Transformers---ArXiv 2021.08.26"](https://arxiv.org/abs/2104.06399)
- Pytorch implementation of ["Conditional Positional Encodings for Vision Transformers"](https://arxiv.org/abs/2102.10882)
- Pytorch implementation of ["Rethinking Spatial Dimensions of Vision Transformers---ICCV 2021"](https://arxiv.org/abs/2103.16302)
- Pytorch implementation of ["CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification---ICCV 2021"](https://arxiv.org/abs/2103.14899)
- Pytorch implementation of ["Transformer in Transformer---NeurIPS 2021"](https://arxiv.org/abs/2103.00112)
- Pytorch implementation of ["DeepViT: Towards Deeper Vision Transformer"](https://arxiv.org/abs/2103.11886)
- Pytorch implementation of ["Incorporating Convolution Designs into Visual Transformers"](https://arxiv.org/abs/2103.11816)
- Pytorch implementation of ["ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases"](https://arxiv.org/abs/2103.10697)
- Pytorch implementation of ["Augmenting Convolutional networks with attention-based aggregation"](https://arxiv.org/abs/2112.13692)
- Pytorch implementation of ["Going deeper with Image Transformers---ICCV 2021 (Oral)"](https://arxiv.org/abs/2103.17239)
- Pytorch implementation of ["Training data-efficient image transformers & distillation through attention---ICML 2021"](https://arxiv.org/abs/2012.12877)
- Pytorch implementation of ["LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference"](https://arxiv.org/abs/2104.01136)
- Pytorch implementation of ["VOLO: Vision Outlooker for Visual Recognition"](https://arxiv.org/abs/2106.13112)
- Pytorch implementation of ["Container: Context Aggregation Network---NeurIPS 2021"](https://arxiv.org/abs/2106.01401)
- Pytorch implementation of ["CMT: Convolutional Neural Networks Meet Vision Transformers---CVPR 2022"](https://arxiv.org/abs/2107.06263)
- Pytorch implementation of ["Vision Transformer with Deformable Attention---CVPR 2022"](https://arxiv.org/abs/2201.00520)
- Pytorch implementation of ["EfficientFormer: Vision Transformers at MobileNet Speed"](https://arxiv.org/abs/2206.01191)
- Pytorch implementation of ["ConvNeXtV2: Co-designing and Scaling ConvNets with Masked Autoencoders"](https://arxiv.org/abs/2301.00808)

### 1. ResNet Usage
#### 1.1. Paper
["Deep Residual Learning for Image Recognition---CVPR2016 Best Paper"](https://arxiv.org/pdf/1512.03385.pdf)

#### 1.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_59a77a1e2933.png)
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_6c10523cc884.jpg)

#### 1.3. Usage Code
```python
from model.backbone.resnet import ResNet50,ResNet101,ResNet152
import torch

if __name__ == '__main__':
    input=torch.randn(50,3,224,224)
    resnet50=ResNet50(1000)    # argument: number of classes
    # resnet101=ResNet101(1000)
    # resnet152=ResNet152(1000)
    out=resnet50(input)
    print(out.shape)
```

### 2. ResNeXt Usage
#### 2.1. Paper
["Aggregated Residual Transformations for Deep Neural Networks---CVPR2017"](https://arxiv.org/abs/1611.05431v2)

#### 2.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_931531bc607a.png)

#### 2.3. Usage Code
```python
from model.backbone.resnext import ResNeXt50,ResNeXt101,ResNeXt152
import torch

if __name__ == '__main__':
    input=torch.randn(50,3,224,224)
    resnext50=ResNeXt50(1000)
    # resnext101=ResNeXt101(1000)
    # resnext152=ResNeXt152(1000)
    out=resnext50(input)
    print(out.shape)
```
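Since the backbone constructors above share the `Model(num_classes)` signature, they are easy to compare side by side. A small sketch (the `count_params` helper is not part of the repository):

```python
from model.backbone.resnet import ResNet50
from model.backbone.resnext import ResNeXt50

def count_params(model):
    """Hypothetical helper: trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

if __name__ == '__main__':
    for net in (ResNet50(1000), ResNeXt50(1000)):
        print(type(net).__name__, f"{count_params(net):.1f}M parameters")
```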
### 3. MobileViT Usage
#### 3.1. Paper
["MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05"](https://arxiv.org/abs/2110.02178)

#### 3.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_8e7e4dcf8f1f.jpg)

#### 3.3. Usage Code
```python
from model.backbone.MobileViT import *
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)

    ### mobilevit_xxs
    mvit_xxs=mobilevit_xxs()
    out=mvit_xxs(input)
    print(out.shape)

    ### mobilevit_xs
    mvit_xs=mobilevit_xs()
    out=mvit_xs(input)
    print(out.shape)

    ### mobilevit_s
    mvit_s=mobilevit_s()
    out=mvit_s(input)
    print(out.shape)
```

### 4. ConvMixer Usage
#### 4.1. Paper
["Patches Are All You Need?---ICLR2022 (Under Review)"](https://openreview.net/forum?id=TVHS5Y4dNvM)

#### 4.2. Overview
![](https://oss.gittoolsai.com/images/xmu-xiaoma666_External-Attention-pytorch_readme_12633cadd598.png)

#### 4.3. Usage Code
```python
from model.backbone.ConvMixer import *
import torch

if __name__ == '__main__':
    x=torch.randn(1,3,224,224)
    convmixer=ConvMixer(dim=512,depth=12)
    out=convmixer(x)
    print(out.shape)    # [1, 1000]
```

### 5. ShuffleTransformer Usage
#### 5.1. Paper
["Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer"](https://arxiv.org/pdf/2106.03650.pdf)

#### 5.2. Usage Code
```python
from model.backbone.ShuffleTransformer import ShuffleTransformer
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    sft = ShuffleTransformer()
    output=sft(input)
    print(output.shape)
```

### 6. ConTNet Usage
#### 6.1. Paper
["ConTNet: Why not use convolution and transformer at the same time?"](https://arxiv.org/abs/2104.13497)

#### 6.2. Usage Code
```python
# The wildcard import also brings in build_model, which this snippet calls.
from model.backbone.ConTNet import *
import torch

if __name__ == "__main__":
    model = build_model(use_avgdown=True, relative=True, qkv_bias=True, pre_norm=True)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
```

### 7. HATNet Usage
#### 7.1. Paper
["Vision Transformers with Hierarchical Attention"](https://arxiv.org/abs/2106.03180)

#### 7.2. Usage Code
```python
from model.backbone.HATNet import HATNet
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    hat = HATNet(dims=[48, 96, 240, 384], head_dim=48, expansions=[8, 8, 4, 4],
        grid_sizes=[8, 7, 7, 1], ds_ratios=[8, 4, 2, 1], depths=[2, 2, 6, 3])
    output=hat(input)
    print(output.shape)
```

### 8. CoaT Usage
#### 8.1. Paper
["Co-Scale Conv-Attentional Image Transformers"](https://arxiv.org/abs/2104.06399)

#### 8.2. Usage Code
```python
from model.backbone.CoaT import CoaT
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = CoaT(patch_size=4, embed_dims=[152, 152, 152, 152], serial_depths=[2, 2, 2, 2], parallel_depth=6, num_heads=8, mlp_ratios=[4, 4, 4, 4])
    output=model(input)
    print(output.shape)    # torch.Size([1, 1000])
```
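For quick shape or latency checks, any backbone in this series can be wrapped in inference mode. A minimal sketch using ResNet50 from section 1 (plain PyTorch, nothing repo-specific):

```python
import torch
from model.backbone.resnet import ResNet50    # any backbone from this series works

if __name__ == '__main__':
    model = ResNet50(1000).eval()    # fix BatchNorm/Dropout in inference behaviour
    with torch.no_grad():            # skip autograd bookkeeping
        out = model(torch.randn(1, 3, 224, 224))
    print(out.shape)
```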
### 9. PVT Usage
#### 9.1. Paper
["PVT v2: Improved Baselines with Pyramid Vision Transformer"](https://arxiv.org/pdf/2106.13797.pdf)

#### 9.2. Usage Code
```python
from model.backbone.PVT import PyramidVisionTransformer
from functools import partial
import torch
from torch import nn

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = PyramidVisionTransformer(
        patch_size=4, embed_dims=[64, 128, 320, 512], num_heads=[1, 2, 5, 8], mlp_ratios=[8, 8, 4, 4], qkv_bias=True,
        norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[2, 2, 2, 2], sr_ratios=[8, 4, 2, 1])
    output=model(input)
    print(output.shape)
```

### 10. CPVT Usage
#### 10.1. Paper
["Conditional Positional Encodings for Vision Transformers"](https://arxiv.org/abs/2102.10882)

#### 10.2. Usage Code
```python
from model.backbone.CPVT import CPVTV2
from functools import partial
import torch
from torch import nn

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = CPVTV2(
        patch_size=4, embed_dims=[64, 128, 320, 512], num_heads=[1, 2, 5, 8], mlp_ratios=[8, 8, 4, 4], qkv_bias=True,
        norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[3, 4, 6, 3], sr_ratios=[8, 4, 2, 1])
    output=model(input)
    print(output.shape)
```

### 11. PIT Usage
#### 11.1. Paper
["Rethinking Spatial Dimensions of Vision Transformers"](https://arxiv.org/abs/2103.16302)

#### 11.2. Usage Code
```python
from model.backbone.PIT import PoolingTransformer
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = PoolingTransformer(
        image_size=224,
        patch_size=14,
        stride=7,
        base_dims=[64, 64, 64],
        depth=[3, 6, 4],
        heads=[4, 8, 16],
        mlp_ratio=4
    )
    output=model(input)
    print(output.shape)
```

### 12. CrossViT Usage
#### 12.1. Paper
["CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification"](https://arxiv.org/abs/2103.14899)

#### 12.2. Usage Code
```python
from model.backbone.CrossViT import VisionTransformer
from functools import partial
import torch
from torch import nn

if __name__ == "__main__":
    input=torch.randn(1,3,224,224)
    model = VisionTransformer(
        img_size=[240, 224],
        patch_size=[12, 16],
        embed_dim=[192, 384],
        depth=[[1, 4, 0], [1, 4, 0], [1, 4, 0]],
        num_heads=[6, 6],
        mlp_ratio=[4, 4, 1],
        qkv_bias=True,
        norm_layer=partial(nn.LayerNorm, eps=1e-6)
    )
    output=model(input)
    print(output.shape)
```

### 13. TnT Usage
#### 13.1. Paper
["Transformer in Transformer"](https://arxiv.org/abs/2103.00112)

#### 13.2. Usage Code
```python
from model.backbone.TnT import TNT
import torch

if __name__ == '__main__':
    input=torch.randn(1,3,224,224)
    model = TNT(
        img_size=224,
        patch_size=16,
        outer_dim=384,
        inner_dim=24,
        depth=12,
        outer_num_heads=6,
        inner_num_heads=4,
        qkv_bias=False,
        inner_stride=4)
    output=model(input)
    print(output.shape)
```
Usage Code\n```python\n\nfrom model.backbone.DViT import DeepVisionTransformer\nimport torch\nfrom torch import nn\nfrom functools import partial  # needed for the norm_layer argument below\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = DeepVisionTransformer(\n        patch_size=16, embed_dim=384, \n        depth=[False] * 16, \n        apply_transform=[False] * 0 + [True] * 32, \n        num_heads=12, \n        mlp_ratio=3, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 15. CeiT Usage\n#### 15.1. Paper\n[Incorporating Convolution Designs into Visual Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11816)\n\n#### 15.2. Usage Code\n```python\n\nfrom model.backbone.CeiT import CeIT, Image2Tokens  # Image2Tokens is the convolutional stem passed in as hybrid_backbone\nimport torch\nfrom torch import nn\nfrom functools import partial\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CeIT(\n        hybrid_backbone=Image2Tokens(),\n        patch_size=4, \n        embed_dim=192, \n        depth=12, \n        num_heads=3, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 16. ConViT Usage\n#### 16.1. Paper\n[ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.10697)\n\n#### 16.2. Usage Code\n```python\n\nfrom model.backbone.ConViT import VisionTransformer\nimport torch\nfrom torch import nn\nfrom functools import partial\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionTransformer(\n        num_heads=16,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 17. CaiT Usage\n#### 17.1. Paper\n[Going deeper with Image Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.17239)\n\n#### 17.2. Usage Code\n```python\n\nfrom model.backbone.CaiT import CaiT\nimport torch\nfrom torch import nn\nfrom functools import partial\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CaiT(\n        img_size= 224,\n        patch_size=16, \n        embed_dim=192, \n        depth=24, \n        num_heads=4, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        init_scale=1e-5,\n        depth_token_only=2\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 18. PatchConvnet Usage\n#### 18.1. Paper\n[Augmenting Convolutional networks with attention-based aggregation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.13692)\n\n#### 18.2. Usage Code\n```python\n\nfrom model.backbone.PatchConvnet import PatchConvnet, ConvStem, Conv_blocks_se  # the stem and SE blocks are passed in below\nimport torch\nfrom torch import nn\nfrom functools import partial\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = PatchConvnet(\n        patch_size=16,\n        embed_dim=384,\n        depth=60,\n        num_heads=1,\n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        Patch_layer=ConvStem,\n        Attention_block=Conv_blocks_se,\n        depth_token_only=1,\n        mlp_ratio_clstk=3.0,\n    )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 19. DeiT Usage\n#### 19.1. Paper\n[Training data-efficient image transformers & distillation through attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.12877)\n\n#### 19.2. 
Usage Code\n```python\n\nfrom model.backbone.DeiT import DistilledVisionTransformer\nimport torch\nfrom torch import nn\nfrom functools import partial  # needed for the norm_layer argument below\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = DistilledVisionTransformer(\n        patch_size=16, \n        embed_dim=384, \n        depth=12, \n        num_heads=6, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output[0].shape)  # the distilled model returns a pair of logits (class token, distillation token)\n\n```\n\n### 20. LeViT Usage\n#### 20.1. Paper\n[LeViT: a Vision Transformer in ConvNet’s Clothing for Faster Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01136)\n\n#### 20.2. Usage Code\n```python\n\nfrom model.backbone.LeViT import *\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    for name in specification:  # iterate over all predefined LeViT variants\n        input=torch.randn(1,3,224,224)\n        model = globals()[name](fuse=True, pretrained=False)\n        model.eval()\n        output = model(input)\n        print(output.shape)\n\n```\n\n### 21. VOLO Usage\n#### 21.1. Paper\n[VOLO: Vision Outlooker for Visual Recognition](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13112)\n\n#### 21.2. Usage Code\n```python\n\nfrom model.backbone.VOLO import VOLO\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VOLO([4, 4, 8, 2],\n                 embed_dims=[192, 384, 384, 384],\n                 num_heads=[6, 12, 12, 12],\n                 mlp_ratios=[3, 3, 3, 3],\n                 downsamples=[True, False, False, False],\n                 outlook_attention=[True, False, False, False],\n                 post_layers=['ca', 'ca'],\n                 )\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 22. Container Usage\n#### 22.1. Paper\n[Container: Context Aggregation Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01401)\n\n#### 22.2. Usage Code\n```python\n\nfrom model.backbone.Container import VisionTransformer\nimport torch\nfrom torch import nn\nfrom functools import partial\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionTransformer(\n        img_size=[224, 56, 28, 14], \n        patch_size=[4, 2, 2, 2], \n        embed_dim=[64, 128, 320, 512], \n        depth=[3, 4, 8, 3], \n        num_heads=16, \n        mlp_ratio=[8, 8, 4, 4], \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6))\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 23. CMT Usage\n#### 23.1. Paper\n[CMT: Convolutional Neural Networks Meet Vision Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06263)\n\n#### 23.2. Usage Code\n```python\n\nfrom model.backbone.CMT import CMT_Tiny\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CMT_Tiny()\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 24. EfficientFormer Usage\n#### 24.1. Paper\n[EfficientFormer: Vision Transformers at MobileNet Speed](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01191)\n\n#### 24.2. Usage Code\n```python\n\nfrom model.backbone.EfficientFormer import *  # import * also brings in the EfficientFormer_depth\u002FEfficientFormer_width config dicts\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = EfficientFormer(\n        layers=EfficientFormer_depth['l1'],\n        embed_dims=EfficientFormer_width['l1'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n    )\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 25. ConvNeXtV2 Usage\n#### 25.1. 
Paper\n[ConvNeXtV2: Co-designing and Scaling ConvNets with Masked Autoencoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00808)\n\n#### 25.2. Usage Code\n```python\n\nfrom model.backbone.convnextv2 import convnextv2_atto\nimport torch\nfrom torch import nn\n\nif __name__ == \"__main__\":\n    model = convnextv2_atto()\n    input = torch.randn(1, 3, 224, 224)\n    out = model(input)\n    print(out.shape)\n\n```\n\n# MLP Series\n\n- Pytorch implementation of [\"RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition---arXiv 2021.05.05\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01883v1.pdf)\n\n- Pytorch implementation of [\"MLP-Mixer: An all-MLP Architecture for Vision---arXiv 2021.05.17\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01601.pdf)\n\n- Pytorch implementation of [\"ResMLP: Feedforward networks for image classification with data-efficient training---arXiv 2021.05.07\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.03404.pdf)\n\n- Pytorch implementation of [\"Pay Attention to MLPs---arXiv 2021.05.17\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08050)\n\n- Pytorch implementation of [\"Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?---arXiv 2021.09.12\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05422)\n\n### 1. RepMLP Usage\n#### 1.1. Paper\n[\"RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01883v1.pdf)\n\n#### 1.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_1ecc96f94f06.png)\n\n#### 1.3. Usage Code\n```python\nfrom model.mlp.repmlp import RepMLP\nimport torch\nfrom torch import nn\n\nN=4 #batch size\nC=512 #input dim\nO=1024 #output dim\nH=14 #image height\nW=14 #image width\nh=7 #patch height\nw=7 #patch width\nfc1_fc2_reduction=1 #reduction ratio\nfc3_groups=8 # groups\nrepconv_kernels=[1,3,5,7] #kernel list\nrepmlp=RepMLP(C,O,H,W,h,w,fc1_fc2_reduction,fc3_groups,repconv_kernels=repconv_kernels)\nx=torch.randn(N,C,H,W)\nrepmlp.eval()  # BN must be in eval mode for the equivalence check below\nfor module in repmlp.modules():\n    if isinstance(module, nn.BatchNorm2d) or isinstance(module, nn.BatchNorm1d):\n        nn.init.uniform_(module.running_mean, 0, 0.1)\n        nn.init.uniform_(module.running_var, 0, 0.1)\n        nn.init.uniform_(module.weight, 0, 0.1)\n        nn.init.uniform_(module.bias, 0, 0.1)\n\n# output of the multi-branch (training-time) structure\nout=repmlp(x)\n# output after re-parameterizing into a single FC (deploy-time)\nrepmlp.switch_to_deploy()\ndeployout = repmlp(x)\n\nprint(((deployout-out)**2).sum())  # should be ~0 up to float error\n```\n\n### 2. MLP-Mixer Usage\n#### 2.1. Paper\n[\"MLP-Mixer: An all-MLP Architecture for Vision\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01601.pdf)\n\n#### 2.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_b4aa183d6d50.png)\n\n#### 2.3. Usage Code\n```python\nfrom model.mlp.mlp_mixer import MlpMixer\nimport torch\nmlp_mixer=MlpMixer(num_classes=1000,num_blocks=10,patch_size=10,tokens_hidden_dim=32,channels_hidden_dim=1024,tokens_mlp_dim=16,channels_mlp_dim=1024)\ninput=torch.randn(50,3,40,40)\noutput=mlp_mixer(input)\nprint(output.shape)\n```
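\n\nTo make it clearer what a Mixer block actually computes, here is a minimal, self-contained sketch of its two interleaved steps: token mixing (an MLP applied across patches) and channel mixing (an MLP applied across channels). Plain PyTorch for illustration only, not the repo's implementation; the per-step LayerNorms are omitted and all sizes are made up:\n\n```python\nimport torch\nfrom torch import nn\n\ntokens, channels = 16, 128  # hypothetical sizes: 16 patches, 128 features each\ntoken_mlp = nn.Sequential(nn.Linear(tokens, 64), nn.GELU(), nn.Linear(64, tokens))\nchannel_mlp = nn.Sequential(nn.Linear(channels, 256), nn.GELU(), nn.Linear(256, channels))\n\nx = torch.randn(8, tokens, channels)  # (batch, patches, channels)\nx = x + token_mlp(x.transpose(1, 2)).transpose(1, 2)  # mix information across patches\nx = x + channel_mlp(x)  # mix information across channels\nprint(x.shape)  # torch.Size([8, 16, 128])\n```\n\n***\n\n### 3. ResMLP Usage\n#### 3.1. Paper\n[\"ResMLP: Feedforward networks for image classification with data-efficient training\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.03404.pdf)\n\n#### 3.2. 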
Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_10d74a989156.png)\n\n#### 3.3. Usage Code\n```python\nfrom model.mlp.resmlp import ResMLP\nimport torch\n\ninput=torch.randn(50,3,14,14)\nresmlp=ResMLP(dim=128,image_size=14,patch_size=7,class_num=1000)\nout=resmlp(input)\nprint(out.shape) #the last dimension is class_num\n```\n\n***\n\n### 4. gMLP Usage\n#### 4.1. Paper\n[\"Pay Attention to MLPs\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08050)\n\n#### 4.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_be4096c31838.jpg)\n\n#### 4.3. Usage Code\n```python\nfrom model.mlp.g_mlp import gMLP\nimport torch\n\nnum_tokens=10000\nbs=50\nlen_sen=49\ninput=torch.randint(num_tokens,(bs,len_sen)) #bs,len_sen\ngmlp = gMLP(num_tokens=num_tokens,len_sen=len_sen,dim=512,d_ff=1024)\noutput=gmlp(input)\nprint(output.shape)\n```\n\n***\n\n### 5. sMLP Usage\n#### 5.1. Paper\n[\"Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05422)\n\n#### 5.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_4f1121cdf942.jpg)\n\n#### 5.3. Usage Code\n```python\nfrom model.mlp.sMLP_block import sMLPBlock\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(50,3,224,224)\n    smlp=sMLPBlock(h=224,w=224)\n    out=smlp(input)\n    print(out.shape)\n```\n\n### 6. vip-mlp Usage\n#### 6.1. Paper\n[\"Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12368)\n\n#### 6.2. Usage Code\n```python\n# the repo ships this module as model\u002Fmlp\u002Fvip-mlp.py; a hyphen is not a valid\n# Python module name, so rename the file to vip_mlp.py (or load it via importlib)\nfrom model.mlp.vip_mlp import VisionPermutator, WeightedPermuteMLP\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionPermutator(\n        layers=[4, 3, 8, 3], \n        embed_dims=[384, 384, 384, 384], \n        patch_size=14, \n        transitions=[False, False, False, False],\n        segment_dim=[16, 16, 16, 16], \n        mlp_ratios=[3, 3, 3, 3], \n        mlp_fn=WeightedPermuteMLP\n    )\n    output=model(input)\n    print(output.shape)\n```\n\n\n# Re-Parameter Series\n\n- Pytorch implementation of [\"RepVGG: Making VGG-style ConvNets Great Again---CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03697)\n\n- Pytorch implementation of [\"ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks---ICCV2019\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03930)\n\n- Pytorch implementation of [\"Diverse Branch Block: Building a Convolution as an Inception-like Unit---CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13425)\n\n\n***\n\n### 1. RepVGG Usage\n#### 1.1. Paper\n[\"RepVGG: Making VGG-style ConvNets Great Again\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03697)\n\n#### 1.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_81a02a397a95.png)\n\n#### 1.3. Usage Code\n```python\n\nfrom model.rep.repvgg import RepBlock\nimport torch\n\ninput=torch.randn(50,512,49,49)\nrepblock=RepBlock(512,512)\nrepblock.eval()\nout=repblock(input)\nrepblock._switch_to_deploy()\nout2=repblock(input)\nprint('difference between vgg and repvgg')\nprint(((out2-out)**2).sum())\n```
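\n\nAll the deploy-time switches in this series (RepMLP's `switch_to_deploy`, RepBlock's `_switch_to_deploy`, ACNet below) rest on the same identity: a convolution followed by a frozen BatchNorm can be folded into a single convolution with adjusted weights and bias. A minimal numerical check of that identity in plain PyTorch (illustration only, not the repo's API):\n\n```python\nimport torch\nfrom torch import nn\n\nconv = nn.Conv2d(8, 8, 3, padding=1, bias=False)\nbn = nn.BatchNorm2d(8).eval()\n# give BN non-trivial statistics so the check is meaningful\nnn.init.uniform_(bn.running_mean, 0, 0.1)\nnn.init.uniform_(bn.running_var, 0.1, 0.2)\nnn.init.uniform_(bn.weight, 0.1, 0.5)\nnn.init.uniform_(bn.bias, 0, 0.1)\n\n# fold BN into the conv: W' = W * gamma \u002F std, b' = beta - gamma * mean \u002F std\nstd = (bn.running_var + bn.eps).sqrt()\nfused = nn.Conv2d(8, 8, 3, padding=1, bias=True)\nfused.weight.data = conv.weight.data * (bn.weight.data \u002F std).reshape(-1, 1, 1, 1)\nfused.bias.data = bn.bias.data - bn.weight.data * bn.running_mean \u002F std\n\nx = torch.randn(2, 8, 16, 16)\nprint(((fused(x) - bn(conv(x))) ** 2).sum())  # ~0 up to float error\n```\n\n***\n\n### 2. 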
ACNet Usage\n#### 2.1. Paper\n[\"ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03930)\n\n#### 2.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_35f66dcfbd64.png)\n\n#### 2.3. Usage Code\n```python\nfrom model.rep.acnet import ACNet\nimport torch\nfrom torch import nn\n\ninput=torch.randn(50,512,49,49)\nacnet=ACNet(512,512)\nacnet.eval()\nout=acnet(input)\nacnet._switch_to_deploy()\nout2=acnet(input)\nprint('difference:')\nprint(((out2-out)**2).sum())\n\n```\n\n***\n\n### 3. Diverse Branch Block Usage\n#### 3.1. Paper\n[\"Diverse Branch Block: Building a Convolution as an Inception-like Unit\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13425)\n\n#### 3.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_2b128a19cd34.png)\n\n#### 3.3. Usage Code\n##### 3.3.1 Transform I\n```python\nfrom model.rep.ddb import transI_conv_bn\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n#conv+bn\nconv1=nn.Conv2d(64,64,3,padding=1)\nbn1=nn.BatchNorm2d(64)\nbn1.eval()\nout1=bn1(conv1(input))\n\n#conv_fuse\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transI_conv_bn(conv1,bn1)\nout2=conv_fuse(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.2 Transform II\n```python\nfrom model.rep.ddb import transII_conv_branch\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#conv+conv\nconv1=nn.Conv2d(64,64,3,padding=1)\nconv2=nn.Conv2d(64,64,3,padding=1)\nout1=conv1(input)+conv2(input)\n\n#conv_fuse\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transII_conv_branch(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.3 Transform III\n```python\nfrom model.rep.ddb import transIII_conv_sequential\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#conv+conv\nconv1=nn.Conv2d(64,64,1,padding=0,bias=False)\nconv2=nn.Conv2d(64,64,3,padding=1,bias=False)\nout1=conv2(conv1(input))\n\n#conv_fuse\nconv_fuse=nn.Conv2d(64,64,3,padding=1,bias=False)\nconv_fuse.weight.data=transIII_conv_sequential(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.4 Transform IV\n```python\nfrom model.rep.ddb import transIV_conv_concat\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#conv+conv\nconv1=nn.Conv2d(64,32,3,padding=1)\nconv2=nn.Conv2d(64,32,3,padding=1)\nout1=torch.cat([conv1(input),conv2(input)],dim=1)\n\n#conv_fuse\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transIV_conv_concat(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.5 Transform V\n```python\nfrom model.rep.ddb import transV_avg\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\navg=nn.AvgPool2d(kernel_size=3,stride=1)\nout1=avg(input)\n\nconv=transV_avg(64,3)\nout2=conv(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.6 Transform VI\n```python\nfrom model.rep.ddb import transVI_conv_scale\nimport torch\nfrom 
torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#conv+conv\nconv1x1=nn.Conv2d(64,64,1)\nconv1x3=nn.Conv2d(64,64,(1,3),padding=(0,1))\nconv3x1=nn.Conv2d(64,64,(3,1),padding=(1,0))\nout1=conv1x1(input)+conv1x3(input)+conv3x1(input)\n\n#conv_fuse\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transVI_conv_scale(conv1x1,conv1x3,conv3x1)\nout2=conv_fuse(input)\n\nprint(\"difference:\",((out2-out1)**2).sum().item())\n```\n\n# Convolution Series\n\n- Pytorch implementation of [\"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications---CVPR2017\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861)\n\n- Pytorch implementation of [\"Efficientnet: Rethinking model scaling for convolutional neural networks---PMLR2019\"](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html)\n\n- Pytorch implementation of [\"Involution: Inverting the Inherence of Convolution for Visual Recognition---CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06255)\n\n- Pytorch implementation of [\"Dynamic Convolution: Attention over Convolution Kernels---CVPR2020 Oral\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03458)\n\n- Pytorch implementation of [\"CondConv: Conditionally Parameterized Convolutions for Efficient Inference---NeurIPS2019\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04971)\n\n***\n\n### 1. Depthwise Separable Convolution Usage\n#### 1.1. Paper\n[\"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861)\n\n#### 1.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_e9b292b9f865.png)\n\n#### 1.3. Usage Code\n```python\nfrom model.conv.DepthwiseSeparableConvolution import DepthwiseSeparableConvolution\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,3,224,224)\ndsconv=DepthwiseSeparableConvolution(3,64)\nout=dsconv(input)\nprint(out.shape)\n```\n\n***\n\n### 2. MBConv Usage\n#### 2.1. Paper\n[\"Efficientnet: Rethinking model scaling for convolutional neural networks\"](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html)\n\n#### 2.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_9fe195963337.jpg)\n\n#### 2.3. Usage Code\n```python\nfrom model.conv.MBConv import MBConvBlock\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,3,224,224)\nmbconv=MBConvBlock(ksize=3,input_filters=3,output_filters=512,image_size=224)\nout=mbconv(input)\nprint(out.shape)\n\n```\n\n***\n\n### 3. Involution Usage\n#### 3.1. Paper\n[\"Involution: Inverting the Inherence of Convolution for Visual Recognition\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06255)\n\n#### 3.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_af9656a72a2d.png)\n\n#### 3.3. Usage Code\n```python\nfrom model.conv.Involution import Involution\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,4,64,64)\ninvolution=Involution(kernel_size=3,in_channel=4,stride=2)\nout=involution(input)\nprint(out.shape)  # (1,4,32,32): stride=2 halves the spatial size\n```
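\n\nWhat sets involution apart from convolution is that the kernel is generated from the input at every spatial position and shared across channels, instead of being a fixed, channel-specific learned weight. A minimal single-group, stride-1 sketch of that idea in plain PyTorch (illustration only, not the repo's implementation):\n\n```python\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nB, C, H, W, K = 1, 4, 8, 8, 3\nx = torch.randn(B, C, H, W)\nkernel_gen = nn.Conv2d(C, K * K, 1)  # one KxK kernel per spatial position\nkernels = kernel_gen(x).reshape(B, 1, K * K, H, W)  # shared across all C channels\npatches = F.unfold(x, K, padding=1).reshape(B, C, K * K, H, W)  # KxK neighborhoods\nout = (kernels * patches).sum(dim=2)  # weight each neighborhood by its own kernel\nprint(out.shape)  # torch.Size([1, 4, 8, 8])\n```\n\n***\n\n\n### 4. DynamicConv Usage\n#### 4.1. 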
Paper\n[\"Dynamic Convolution: Attention over Convolution Kernels\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03458)\n\n#### 4.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_54c450055ecd.png)\n\n#### 4.3. Usage Code\n```python\nfrom model.conv.DynamicConv import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(2,32,64,64)\n    m=DynamicConv(in_planes=32,out_planes=64,kernel_size=3,stride=1,padding=1,bias=False)\n    out=m(input)\n    print(out.shape) # 2,32,64,64\n\n```\n\n***\n\n\n### 5. CondConv Usage\n#### 5.1. Paper\n[\"CondConv: Conditionally Parameterized Convolutions for Efficient Inference\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04971)\n\n#### 5.2. Overview\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_56f0d9379400.png)\n\n#### 5.3. Usage Code\n```python\nfrom model.conv.CondConv import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\n\n\n\nif __name__ == '__main__':\n    input=torch.randn(2,32,64,64)\n    m=CondConv(in_planes=32,out_planes=64,kernel_size=3,stride=1,padding=1,bias=False)\n    out=m(input)\n    print(out.shape)\n\n```\n\n\n\n## 其他项目推荐\n\n-------\n\n🔥🔥🔥 重磅！！！作为项目补充，更多论文层面的解析，可以关注新开源的项目 **[FightingCV-Paper-Reading](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FFightingCV-Paper-Reading)** ，里面汇集和整理了各大顶会顶刊的论文解析\n\n\n\n🔥🔥🔥重磅！！！ 最近为大家整理了网上的各种AI相关的视频教程和必读论文 **[FightingCV-Course\n](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FFightingCV-Course)**\n\n\n🔥🔥🔥 重磅！！！最近全新开源了一个 **[YOLOAir](https:\u002F\u002Fgithub.com\u002Fiscyy\u002Fyoloair)** 目标检测代码库 ，里面集成了多种YOLO模型，包括YOLOv5, YOLOv7,YOLOR, YOLOX,YOLOv4, YOLOv3以及其他YOLO模型，还包括多种现有Attention机制。\n\n\n🔥🔥🔥 **ECCV2022论文汇总：[ECCV2022-Paper-List](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FECCV2022-Paper-List\u002Fblob\u002Fmaster\u002FREADME.md)**\n\n\n\u003C!-- ![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_f560de3b1c3b.png) -->\n","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_2b0c5abc6cfa.gif\" height=\"200\" width=\"400\"\u002F>\n\n简体中文 | [English](.\u002FREADME_EN.md)\n\n# FightingCV 代码库， 包含 [***注意力***](#attention-series),[***骨干网络***](#backbone-series), [***MLP***](#mlp-series), [***重参数化***](#re-parameter-series), [**卷积**](#convolution-series)\n\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Ffightingcv-v0.0.1-brightgreen)\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython->=v3.0-blue)\n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpytorch->=v1.4-red)\n\n\u003C!--\n-------\n*如果这个项目对您有帮助，请记得给个***星***。*\n\n*别忘了***关注***我，以便及时了解项目更新。*\n\n-->\n\n\n\u003C!--\n\n\n大家好，我是小马🚀🚀🚀\n\n***对于小白（像我一样）：***\n最近在阅读论文时会发现一个问题：有时候论文的核心思想非常简单，核心代码可能只有十几行。然而，当我打开作者发布的源码时，却发现这些模块被嵌入到了分类、检测、分割等任务框架中，导致代码冗长复杂。对于不熟悉特定任务框架的我来说，**很难找到核心代码**，这使得我对论文和网络结构的理解产生了一定困难。\n\n***对于进阶者（像您一样）：***\n如果把卷积、全连接层、RNN等基本单元看作是小小的乐高积木，而Transformer、ResNet等结构则像是已经搭建好的乐高城堡。那么本项目提供的模块就是一个个具有完整语义信息的乐高组件。**让科研工作者们避免重复造轮子**，只需思考如何利用这些“乐高组件”，搭建出更多丰富多彩的作品。\n\n***对于大神（也许就是您）：***\n能力有限，**请勿随意批评**！！！\n\n***对于所有人：***\n本项目致力于打造一个既能**让深度学习初学者也能轻松理解**，又能**服务于科研和工业社区**的代码库。\n\n-->\n\n\u003C!--\n\n作为[**FightingCV公众号**](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fm9RiivbbDPdjABsTd6q8FA)和 
**[FightingCV-Paper-Reading](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FFightingCV-Paper-Reading)** 的补充，本项目的宗旨是从代码角度出发，实现🚀**让世界上没有难读的论文**🚀。\n\n\n（同时也非常欢迎各位科研工作者将自己的工作的核心代码整理到本项目中，推动科研社区的发展，会在readme中注明代码的作者~）\n\n\n\n\n## 技术交流 \u003Cimg title=\"\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_ba39e2a5e577.png\" alt=\"\" width=\"20\">\n\n欢迎大家关注公众号：**FightingCV**\n\n\n\n| FightingCV公众号 | 小助手微信 （备注【**公司\u002F学校+方向+ID**】）|\n:-------------------------:|:-------------------------:\n\u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_1d2f301c90fb.jpg' width='200px'>  |  \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_b6bb214afce7.jpg' width='200px'> \n\n- 公众号**每天**都会分享**论文、算法和代码的干货**哦~\n\n- **交流群每天也会分享最新的论文及解析**，欢迎大家一起来**学习交流**哈~~~\n\n- 强烈推荐大家关注[**知乎**](https:\u002F\u002Fwww.zhihu.com\u002Fpeople\u002Fjason-14-58-38\u002Fposts)账号和[**FightingCV公众号**](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fm9RiivbbDPdjABsTd6q8FA)，可以快速了解到最新优质的干货资源。\n\n\n-------\n\n\n-->\n\n## 🌟 星级历史\n\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_8918d6f0a640.png)](https:\u002F\u002Fstar-history.com\u002F#xmu-xiaoma666\u002FExternal-Attention-pytorch&Date)\n\n## 使用\n\n### 安装\n\n 直接通过 pip 安装\n\n  ```shell\n  pip install fightingcv-attention\n  ```\n\n\n或克隆该仓库\n\n  ```shell\n  git clone https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch.git\n\n  cd External-Attention-pytorch\n  ```\n\n### 演示\n\n#### 使用 pip 方式\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n# 使用 pip 方式\n\nfrom fightingcv_attention.attention.MobileViTv2Attention import *\n\nif __name__ == '__main__':\n    input=torch.randn(50,49,512)\n    sa = MobileViTv2Attention(d_model=512)\n    output=sa(input)\n    print(output.shape)\n```\n\n - pip包 内置模块使用参考: [fightingcv-attention 说明文档](.\u002FREADME_pip.md)\n\n#### 使用 git 方式\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n# 与 pip方式 区别在于 将 `fightingcv_attention` 替换 `model`\n\nfrom model.attention.MobileViTv2Attention import *\n\nif __name__ == '__main__':\n    input=torch.randn(50,49,512)\n    sa = MobileViTv2Attention(d_model=512)\n    output=sa(input)\n    print(output.shape)\n```\n\n-------\n\n# 目录\n\n- [注意力机制系列](#attention-series)\n    - [1. 外部注意力使用](#1-external-attention-usage)\n\n    - [2. 自注意力使用](#2-self-attention-usage)\n\n    - [3. 简化自注意力使用](#3-simplified-self-attention-usage)\n\n    - [4. 激励挤压注意力使用](#4-squeeze-and-excitation-attention-usage)\n\n    - [5. SK注意力使用](#5-sk-attention-usage)\n\n    - [6. CBAM注意力使用](#6-cbam-attention-usage)\n\n    - [7. BAM注意力使用](#7-bam-attention-usage)\n  \n    - [8. ECA注意力使用](#8-eca-attention-usage)\n\n    - [9. DANet注意力使用](#9-danet-attention-usage)\n\n    - [10. 金字塔分割注意力（PSA）使用](#10-Pyramid-Split-Attention-Usage)\n\n    - [11. 高效多头自注意力（EMSA）使用](#11-Efficient-Multi-Head-Self-Attention-Usage)\n\n    - [12. 洗牌注意力使用](#12-Shuffle-Attention-Usage)\n  \n    - [13. MUSE注意力使用](#13-MUSE-Attention-Usage)\n  \n    - [14. SGE注意力使用](#14-SGE-Attention-Usage)\n\n    - [15. A2注意力使用](#15-A2-Attention-Usage)\n\n    - [16. AFT注意力使用](#16-AFT-Attention-Usage)\n\n    - [17. Outlook注意力使用](#17-Outlook-Attention-Usage)\n\n    - [18. ViP注意力使用](#18-ViP-Attention-Usage)\n\n    - [19. 
CoAtNet注意力使用](#19-CoAtNet-Attention-Usage)\n\n    - [20. HaloNet注意力使用](#20-HaloNet-Attention-Usage)\n\n    - [21. 极化自注意力使用](#21-Polarized-Self-Attention-Usage)\n\n    - [22. CoTAttention使用](#22-CoTAttention-Usage)\n\n    - [23. 残差注意力使用](#23-Residual-Attention-Usage)\n  \n    - [24. S2注意力使用](#24-S2-Attention-Usage)\n\n    - [25. GFNet注意力使用](#25-GFNet-Attention-Usage)\n\n    - [26. 三元组注意力使用](#26-TripletAttention-Usage)\n\n    - [27. 坐标注意力使用](#27-Coordinate-Attention-Usage)\n\n    - [28. MobileViT注意力使用](#28-MobileViT-Attention-Usage)\n\n    - [29. ParNet注意力使用](#29-ParNet-Attention-Usage)\n\n    - [30. UFO注意力使用](#30-UFO-Attention-Usage)\n\n    - [31. ACmix注意力使用](#31-Acmix-Attention-Usage)\n  \n    - [32. MobileViTv2注意力使用](#32-MobileViTv2-Attention-Usage)\n\n    - [33. DAT注意力使用](#33-DAT-Attention-Usage)\n\n    - [34. CrossFormer注意力使用](#34-CrossFormer-Attention-Usage)\n\n    - [35. MOATransformer注意力使用](#35-MOATransformer-Attention-Usage)\n\n    - [36. CrissCrossAttention注意力使用](#36-CrissCrossAttention-Attention-Usage)\n\n    - [37. Axial_attention注意力使用](#37-Axial_attention-Attention-Usage)\n\n- [骨干网络系列](#Backbone-series)\n\n    - [1. ResNet使用](#1-ResNet-Usage)\n\n    - [2. ResNeXt使用](#2-ResNeXt-Usage)\n\n    - [3. MobileViT使用](#3-MobileViT-Usage)\n\n    - [4. ConvMixer使用](#4-ConvMixer-Usage)\n\n    - [5. ShuffleTransformer使用](#5-ShuffleTransformer-Usage)\n\n    - [6. ConTNet使用](#6-ConTNet-Usage)\n\n    - [7. HATNet使用](#7-HATNet-Usage)\n\n    - [8. CoaT使用](#8-CoaT-Usage)\n\n    - [9. PVT使用](#9-PVT-Usage)\n\n    - [10. CPVT使用](#10-CPVT-Usage)\n\n    - [11. PIT使用](#11-PIT-Usage)\n\n    - [12. CrossViT使用](#12-CrossViT-Usage)\n\n    - [13. TnT使用](#13-TnT-Usage)\n\n    - [14. DViT使用](#14-DViT-Usage)\n\n    - [15. CeiT使用](#15-CeiT-Usage)\n\n    - [16. ConViT使用](#16-ConViT-Usage)\n\n    - [17. CaiT使用](#17-CaiT-Usage)\n\n    - [18. PatchConvnet使用](#18-PatchConvnet-Usage)\n\n    - [19. DeiT使用](#19-DeiT-Usage)\n\n    - [20. LeViT使用](#20-LeViT-Usage)\n\n    - [21. VOLO使用](#21-VOLO-Usage)\n  \n    - [22. Container使用](#22-Container-Usage)\n\n    - [23. CMT使用](#23-CMT-Usage)\n\n    - [24. EfficientFormer使用](#24-EfficientFormer-Usage)\n\n    - [25. ConvNeXtV2使用](#25-ConvNeXtV2-Usage)\n\n\n\n- [MLP系列](#mlp-series)\n\n    - [1. RepMLP使用](#1-RepMLP-Usage)\n\n    - [2. MLP-Mixer使用](#2-MLP-Mixer-Usage)\n\n    - [3. ResMLP使用](#3-ResMLP-Usage)\n\n    - [4. gMLP使用](#4-gMLP-Usage)\n\n    - [5. sMLP使用](#5-sMLP-Usage)\n\n    - [6. vip-mlp使用](#6-vip-mlp-Usage)\n\n- [重参数化（ReP）系列](#Re-Parameter-series)\n\n    - [1. RepVGG使用](#1-RepVGG-Usage)\n\n    - [2. ACNet使用](#2-ACNet-Usage)\n\n    - [3. 多分支模块（DDB）使用](#3-Diverse-Branch-Block-Usage)\n\n- [卷积系列](#Convolution-series)\n\n    - [1. 深度可分离卷积使用](#1-Depthwise-Separable-Convolution-Usage)\n\n    - [2. MBConv使用](#2-MBConv-Usage)\n\n    - [3. Involution使用](#3-Involution-Usage)\n\n    - [4. 动态卷积使用](#4-DynamicConv-Usage)\n\n    - [5. 
CondConv使用](#5-CondConv-Usage)\n\n***\n\n# 注意力系列\n\n- [\"超越自注意力：用于视觉任务的基于两个线性层的外部注意力---arXiv 2021.05.05\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.02358) 的 PyTorch 实现\n\n- [\"Attention Is All You Need---NIPS2017\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03762.pdf) 的 PyTorch 实现\n\n- [\"Squeeze-and-Excitation Networks---CVPR2018\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01507) 的 PyTorch 实现\n\n- [\"Selective Kernel Networks---CVPR2019\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.06586.pdf) 的 PyTorch 实现\n\n- [\"CBAM: 卷积块注意力模块---ECCV2018\"](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FSanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.pdf) 的 PyTorch 实现\n\n- [\"BAM: 瓶颈注意力模块---BMVC2018\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06514.pdf) 的 PyTorch 实现\n\n- [\"ECA-Net: 针对深度卷积神经网络的高效通道注意力---CVPR2020\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.03151.pdf) 的 PyTorch 实现\n\n- [\"场景分割的双注意力网络---CVPR2019\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.02983.pdf) 的 PyTorch 实现\n\n- [\"EPSANet: 卷积神经网络上的高效金字塔分裂注意力模块---arXiv 2021.05.30\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14447.pdf) 的 PyTorch 实现\n\n- [\"ResT: 用于视觉识别的高效Transformer---arXiv 2021.05.28\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.13677) 的 PyTorch 实现\n\n- [\"SA-NET: 用于深度卷积神经网络的SHUFFLE注意力---ICASSP 2021\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.00240.pdf) 的 PyTorch 实现\n\n- [\"MUSE: 用于序列到序列学习的并行多尺度注意力---arXiv 2019.11.17\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.09483) 的 PyTorch 实现\n\n- [\"空间分组增强：改进卷积网络中的语义特征学习---arXiv 2019.05.23\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.09646.pdf) 的 PyTorch 实现\n\n- [\"A2-Nets: 双重注意力网络---NIPS2018\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11579.pdf) 的 PyTorch 实现\n\n- [\"无注意力Transformer---ICLR2021 (Apple 新作)\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14103v1.pdf) 的 PyTorch 实现\n\n- [\"VOLO: 视觉观察者用于视觉识别---arXiv 2021.06.24\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13112) 的 PyTorch 实现 \n  [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F385561050)\n\n- [\"Vision Permutator: 一种可置换的类MLP架构用于视觉识别---arXiv 2021.06.23\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12368) 的 PyTorch 实现 \n  [【论文解析】](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002F5gonUQgBho_m2O54jyXF_Q)\n\n- [\"CoAtNet: 将卷积与注意力结合以适应所有数据规模---arXiv 2021.06.09\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04803) 的 PyTorch 实现 \n  [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F385578588)\n\n- [\"用于参数高效的视觉骨干网络的局部自注意力缩放---CVPR2021 口头报告\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.12731.pdf) 的 PyTorch 实现  [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F388598744)\n\n- [\"极化自注意力：迈向高质量像素级回归---arXiv 2021.07.02\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.00782) 的 PyTorch 实现  [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F389770482) \n\n- [\"用于视觉识别的上下文Transformer网络---arXiv 2021.07.26\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12292) 的 PyTorch 实现  [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F394795481) \n\n- [\"残差注意力：一种简单但有效的多标签识别方法---ICCV2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.02456) 的 PyTorch 实现 \n\n- [\"S²-MLPv2: 改进的空间移位MLP架构用于视觉---arXiv 2021.08.02\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.01072) 的 PyTorch 实现 [【论文解析】](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F397003638) \n\n- [\"用于图像分类的全局滤波网络---arXiv 2021.07.01\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.00645) 的 PyTorch 实现 \n\n- [\"旋转以关注：卷积三元注意力模块---WACV 
2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03045) 的 PyTorch 实现 \n\n- [\"用于高效移动网络设计的坐标注意力---CVPR 2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02907) 的 PyTorch 实现\n\n- [\"MobileViT: 轻量级、通用且适合移动端的视觉Transformer---ArXiv 2021.10.05\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02178) 的 PyTorch 实现\n\n- [\"非深度网络---ArXiv 2021.10.20\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07641) 的 PyTorch 实现\n\n- [\"UFO-ViT: 不使用Softmax的高性能线性视觉Transformer---ArXiv 2021.09.29\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.14382) 的 PyTorch 实现\n\n- [\"适用于移动视觉Transformer的可分离自注意力---ArXiv 2022.06.06\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02680) 的 PyTorch 实现\n\n- [\"关于自注意力与卷积融合的研究---ArXiv 2022.03.14\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14556.pdf) 的 PyTorch 实现\n\n- [\"CROSSFORMER: 一种基于跨尺度注意力的多功能视觉Transformer---ICLR 2022\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.00154.pdf) 的 PyTorch 实现\n\n- [\"将全局特征聚合到局部视觉Transformer中\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12903) 的 PyTorch 实现\n\n- [\"CCNet: 用于语义分割的十字交叉注意力\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.11721) 的 PyTorch 实现\n\n- [\"多维Transformer中的轴向注意力\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.12180) 的 PyTorch 实现\n***\n\n\n### 1. 外部注意力使用\n#### 1.1. 论文\n[\"超越自注意力：用于视觉任务的双线性层外部注意力\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.02358)\n\n#### 1.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_ab18545bd81d.png)\n\n#### 1.3. 使用代码\n```python\nfrom model.attention.ExternalAttention import ExternalAttention\nimport torch\n\ninput=torch.randn(50,49,512)\nea = ExternalAttention(d_model=512,S=8)\noutput=ea(input)\nprint(output.shape)\n```\n\n***\n\n\n### 2. 自注意力使用\n#### 2.1. 论文\n[\"Attention Is All You Need\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03762.pdf)\n\n#### 1.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_44262dd769da.png)\n\n#### 1.3. 使用代码\n```python\nfrom model.attention.SelfAttention import ScaledDotProductAttention\nimport torch\n\ninput=torch.randn(50,49,512)\nsa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)\noutput=sa(input,input,input)\nprint(output.shape)\n```\n\n***\n\n### 3. 简化自注意力使用\n#### 3.1. 论文\n[无]()\n\n#### 3.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_d0d96ce0a6c7.png)\n\n#### 3.3. 使用代码\n```python\nfrom model.attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention\nimport torch\n\ninput=torch.randn(50,49,512)\nssa = SimplifiedScaledDotProductAttention(d_model=512, h=8)\noutput=ssa(input,input,input)\nprint(output.shape)\n\n```\n\n***\n\n### 4. 挤压与激励注意力使用方法\n#### 4.1. 论文\n[\"挤压与激励网络\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01507)\n\n#### 4.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_88e209a50982.png)\n\n#### 4.3. 使用代码\n```python\nfrom model.attention.SEAttention import SEAttention\nimport torch\n\ninput=torch.randn(50,512,7,7)\nse = SEAttention(channel=512,reduction=8)\noutput=se(input)\nprint(output.shape)\n\n```\n\n***\n\n### 5. SK注意力使用方法\n#### 5.1. 论文\n[\"选择性卷积核网络\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.06586.pdf)\n\n#### 5.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_ceb5b480d397.png)\n\n#### 5.3. 
使用代码\n```python\nfrom model.attention.SKAttention import SKAttention\nimport torch\n\ninput=torch.randn(50,512,7,7)\nse = SKAttention(channel=512,reduction=8)\noutput=se(input)\nprint(output.shape)\n\n```\n***\n\n### 6. CBAM注意力使用方法\n#### 6.1. 论文\n[\"CBAM：卷积块注意力模块\"](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FSanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.pdf)\n\n#### 6.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_8d5dded2ddca.png)\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_330d7cb36174.png)\n\n#### 6.3. 使用代码\n```python\nfrom model.attention.CBAM import CBAMBlock\nimport torch\n\ninput=torch.randn(50,512,7,7)\nkernel_size=input.shape[2]\ncbam = CBAMBlock(channel=512,reduction=16,kernel_size=kernel_size)\noutput=cbam(input)\nprint(output.shape)\n\n```\n\n***\n\n### 7. BAM注意力使用方法\n#### 7.1. 论文\n[\"BAM：瓶颈注意力模块\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.06514.pdf)\n\n#### 7.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_a7fa1aac9d36.png)\n\n#### 7.3. 使用代码\n```python\nfrom model.attention.BAM import BAMBlock\nimport torch\n\ninput=torch.randn(50,512,7,7)\nbam = BAMBlock(channel=512,reduction=16,dia_val=2)\noutput=bam(input)\nprint(output.shape)\n\n```\n\n***\n\n### 8. ECA注意力使用方法\n#### 8.1. 论文\n[\"ECA-Net：深度卷积神经网络的高效通道注意力\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.03151.pdf)\n\n#### 8.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_27640d0e5c4d.png)\n\n#### 8.3. 使用代码\n```python\nfrom model.attention.ECAAttention import ECAAttention\nimport torch\n\ninput=torch.randn(50,512,7,7)\neca = ECAAttention(kernel_size=3)\noutput=eca(input)\nprint(output.shape)\n\n```\n\n***\n\n### 9. DANet注意力使用方法\n#### 9.1. 论文\n[\"用于场景分割的双注意力网络\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.02983.pdf)\n\n#### 9.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_c273ac0ded35.png)\n\n#### 9.3. 使用代码\n```python\nfrom model.attention.DANet import DAModule\nimport torch\n\ninput=torch.randn(50,512,7,7)\ndanet=DAModule(d_model=512,kernel_size=3,H=7,W=7)\nprint(danet(input).shape)\n\n```\n\n***\n\n### 10. 金字塔分割注意力使用方法\n\n#### 10.1. 论文\n[\"EPSANet：卷积神经网络上的高效金字塔分割注意力模块\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14447.pdf)\n\n#### 10.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_86fae2eb3db5.png)\n\n#### 10.3. 使用代码\n```python\nfrom model.attention.PSA import PSA\nimport torch\n\ninput=torch.randn(50,512,7,7)\npsa = PSA(channel=512,reduction=8)\noutput=psa(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 11. 高效多头自注意力使用方法\n\n#### 11.1. 论文\n[\"ResT：一种用于视觉识别的高效Transformer\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.13677)\n\n#### 11.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_884191f141c7.png)\n\n#### 11.3. 使用代码\n```python\n\nfrom model.attention.EMSA import EMSA\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,64,512)\nemsa = EMSA(d_model=512, d_k=512, d_v=512, h=8,H=8,W=8,ratio=2,apply_transform=True)\noutput=emsa(input,input,input)\nprint(output.shape)\n    \n```\n\n***\n\n\n### 12. 洗牌注意力使用方法\n\n#### 12.1. 
论文\n[\"SA-NET：深度卷积神经网络的洗牌注意力\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.00240.pdf)\n\n#### 12.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_75161322f3b9.jpg)\n\n#### 12.3. 使用代码\n```python\n\nfrom model.attention.ShuffleAttention import ShuffleAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\ninput=torch.randn(50,512,7,7)\nse = ShuffleAttention(channel=512,G=8)\noutput=se(input)\nprint(output.shape)\n\n    \n```\n\n\n***\n\n\n### 13. MUSE注意力使用方法\n\n#### 13.1. 论文\n[\"MUSE：用于序列到序列学习的并行多尺度注意力\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.09483)\n\n#### 13.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_e54a72841fa9.png)\n\n#### 13.3. 使用代码\n```python\nfrom model.attention.MUSEAttention import MUSEAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\ninput=torch.randn(50,49,512)\nsa = MUSEAttention(d_model=512, d_k=512, d_v=512, h=8)\noutput=sa(input,input,input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 14. SGE注意力使用方法\n\n#### 14.1. 论文\n[空间分组增强：改进卷积网络中的语义特征学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.09646.pdf)\n\n#### 14.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_35313e77ea8b.png)\n\n#### 14.3. 使用代码\n```python\nfrom model.attention.SGE import SpatialGroupEnhance\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,512,7,7)\nsge = SpatialGroupEnhance(groups=8)\noutput=sge(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 15. A2注意力使用方法\n\n#### 15.1. 论文\n[A2-Nets：双重注意力网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1810.11579.pdf)\n\n#### 15.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_ee4fac013777.png)\n\n#### 15.3. 使用代码\n```python\nfrom model.attention.A2Atttention import DoubleAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,512,7,7)\na2 = DoubleAttention(512,128,128,True)\noutput=a2(input)\nprint(output.shape)\n\n```\n\n\n\n### 16. AFT注意力使用方法\n\n#### 16.1. 论文\n[无注意力Transformer](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.14103v1.pdf)\n\n#### 16.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_6da7a23607ec.jpg)\n\n#### 16.3. 使用代码\n```python\nfrom model.attention.AFT import AFT_FULL\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,49,512)\naft_full = AFT_FULL(d_model=512, n=49)\noutput=aft_full(input)\nprint(output.shape)\n\n```\n\n\n\n\n\n\n### 17. Outlook注意力使用方法\n\n#### 17.1. 论文\n\n\n[VOLO：用于视觉识别的视觉展望者\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13112)\n\n\n#### 17.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_51af1ff37183.png)\n\n#### 17.3. 使用代码\n```python\nfrom model.attention.OutlookAttention import OutlookAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,28,28,512)\noutlook = OutlookAttention(dim=512)\noutput=outlook(input)\nprint(output.shape)\n\n```\n\n\n***\n\n\n\n\n\n\n### 18. ViP注意力使用方法\n\n#### 18.1. 论文\n\n\n[Vision Permutator：一种用于视觉识别的可置换MLP-like架构\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12368)\n\n\n#### 18.2. 
概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_eba677a6c7b5.png)\n\n#### 18.3. 使用代码\n```python\n\nfrom model.attention.ViP import WeightedPermuteMLP\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(64,8,8,512)\nseg_dim=8\nvip=WeightedPermuteMLP(512,seg_dim)\nout=vip(input)\nprint(out.shape)\n\n```\n\n***\n\n### 19. CoAtNet 注意力使用\n\n#### 19.1. 论文\n[CoAtNet: 将卷积与注意力结合以适应所有数据规模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.04803) \n\n#### 19.2. 概述\n无\n\n#### 19.3. 使用代码\n```python\n\nfrom model.attention.CoAtNet import CoAtNet\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,3,224,224)\ncoatnet=CoAtNet(in_ch=3,image_size=224)\nout=coatnet(input)\nprint(out.shape)\n\n```\n\n***\n\n### 20. HaloNet 注意力使用\n\n#### 20.1. 论文\n[用于参数高效视觉骨干网络的局部自注意力扩展](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.12731.pdf) \n\n#### 20.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_d97769fd6847.png)\n\n#### 20.3. 使用代码\n```python\n\nfrom model.attention.HaloAttention import HaloAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,512,8,8)\nhalo = HaloAttention(dim=512,\n    block_size=2,\n    halo_size=1,)\noutput=halo(input)\nprint(output.shape)\n\n```\n\n***\n\n### 21. 极化自注意力使用\n\n#### 21.1. 论文\n\n[极化自注意力：迈向高质量的逐像素回归](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.00782)  \n\n#### 21.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_c78992e6d3d5.png)\n\n#### 21.3. 使用代码\n```python\n\nfrom model.attention.PolarizedSelfAttention import ParallelPolarizedSelfAttention,SequentialPolarizedSelfAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,512,7,7)\npsa = SequentialPolarizedSelfAttention(channel=512)\noutput=psa(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 22. CoT注意力使用\n\n#### 22.1. 论文\n\n[用于视觉识别的上下文Transformer网络---arXiv 2021年7月26日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.12292) \n\n#### 22.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_b9f9b989b30b.png)\n\n#### 22.3. 使用代码\n```python\n\nfrom model.attention.CoTAttention import CoTAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,512,7,7)\ncot = CoTAttention(dim=512,kernel_size=3)\noutput=cot(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 23. 残差注意力使用\n\n#### 23.1. 论文\n\n[残差注意力：一种简单但有效的多标签识别方法---ICCV2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.02456) \n\n#### 23.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_4390afb38c44.png)\n\n#### 23.3. 使用代码\n```python\n\nfrom model.attention.ResidualAttention import ResidualAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,512,7,7)\nresatt = ResidualAttention(channel=512,num_class=1000,la=0.2)\noutput=resatt(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 24. S2注意力使用\n\n#### 24.1. 论文\n\n[S²-MLPv2：改进的空间移位MLP视觉架构---arXiv 2021年8月2日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.01072) \n\n#### 24.2. 
概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_0c47cd693dcf.png)\n\n#### 24.3. 使用代码\n```python\nfrom model.attention.S2Attention import S2Attention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(50,512,7,7)\ns2att = S2Attention(channels=512)\noutput=s2att(input)\nprint(output.shape)\n\n```\n\n***\n\n\n### 25. GFNet 注意力使用\n\n#### 25.1. 论文\n\n[用于图像分类的全局滤波器网络---arXiv 2021年7月1日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.00645) \n\n#### 25.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_e5055ead1bc9.jpg)\n\n#### 25.3. 使用代码 - 由 [Wenliang Zhao (作者)](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=lyPWvuEAAAAJ&hl=en) 实现\n\n```python\nfrom model.attention.gfnet import GFNet\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nx = torch.randn(1, 3, 224, 224)\ngfnet = GFNet(embed_dim=384, img_size=224, patch_size=16, num_classes=1000)\nout = gfnet(x)\nprint(out.shape)\n\n```\n\n***\n\n\n### 26. 三元组注意力使用\n\n#### 26.1. 论文\n\n[旋转以关注：卷积三元组注意力模块---WACV 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.03045) \n\n#### 26.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_b70ec45201f6.png)\n\n#### 26.3. 使用代码 - 由 [digantamisra98](https:\u002F\u002Fgithub.com\u002Fdigantamisra98) 实现\n\n```python\nfrom model.attention.TripletAttention import TripletAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\ninput=torch.randn(50,512,7,7)\ntriplet = TripletAttention()\noutput=triplet(input)\nprint(output.shape)\n```\n\n***\n\n\n### 27. 坐标注意力使用\n\n#### 27.1. 论文\n\n[用于高效移动网络设计的坐标注意力---CVPR 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02907)\n\n#### 27.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_630691aa3c11.png)\n\n#### 27.3. 使用代码 - 由 [Andrew-Qibin](https:\u002F\u002Fgithub.com\u002FAndrew-Qibin) 实现\n\n```python\nfrom model.attention.CoordAttention import CoordAtt\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninp=torch.rand([2, 96, 56, 56])\ninp_dim, oup_dim = 96, 96\nreduction=32\n\ncoord_attention = CoordAtt(inp_dim, oup_dim, reduction=reduction)\noutput=coord_attention(inp)\nprint(output.shape)\n```\n\n***\n\n\n### 28. MobileViT 注意力使用\n\n#### 28.1. 论文\n\n[MobileViT：轻量级、通用且适合移动端的视觉Transformer---ArXiv 2021年10月5日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.02178)\n\n#### 28.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_7eb9538809da.png)\n\n#### 28.3. 使用代码\n\n```python\nfrom model.attention.MobileViTAttention import MobileViTAttention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    m=MobileViTAttention()\n    input=torch.randn(1,3,49,49)\n    output=m(input)\n    print(output.shape)  #输出:(1,3,49,49)\n    \n```\n\n***\n\n\n### 29. ParNet 注意力使用\n\n#### 29.1. 论文\n\n[非深度网络---ArXiv 2021年10月20日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.07641)\n\n#### 29.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_361866667326.png)\n\n#### 29.3. 
使用代码\n\n```python\nfrom model.attention.ParNetAttention import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(50,512,7,7)\n    pna = ParNetAttention(channel=512)\n    output=pna(input)\n    print(output.shape) #50,512,7,7\n    \n```\n\n***\n\n\n### 30. UFO 注意力使用\n\n#### 30.1. 论文\n\n[UFO-ViT：高性能线性视觉Transformer，无需Softmax---ArXiv 2021年9月29日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.14382)\n\n\n#### 30.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_51620dd3b70e.png)\n\n#### 30.3. 使用代码\n\n```python\nfrom model.attention.UFOAttention import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(50,49,512)\n    ufo = UFOAttention(d_model=512, d_k=512, d_v=512, h=8)\n    output=ufo(input,input,input)\n    print(output.shape) #[50, 49, 512]\n    \n```\n\n***\n\n### 31. ACmix 注意力使用\n\n#### 31.1. 论文\n\n[关于自注意力与卷积的融合](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.14556.pdf)\n\n#### 31.2. 使用代码\n\n```python\nfrom model.attention.ACmix import ACmix\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(50,256,7,7)\n    acmix = ACmix(in_planes=256, out_planes=256)\n    output=acmix(input)\n    print(output.shape)\n    \n```\n\n### 32. MobileViTv2 注意力使用方法\n\n#### 32.1. 论文\n\n[用于移动端视觉Transformer的可分离自注意力---ArXiv 2022年6月6日](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02680)\n\n\n#### 32.2. 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_f5b3b9eec5e9.png)\n\n#### 32.3. 使用代码\n\n```python\nfrom model.attention.MobileViTv2Attention import MobileViTv2Attention\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(50,49,512)\n    sa = MobileViTv2Attention(d_model=512)\n    output=sa(input)\n    print(output.shape)\n    \n```\n\n### 33. DAT 注意力使用方法\n\n#### 33.1. 论文\n\n[带有可变形注意力的视觉Transformer---CVPR2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.00520)\n\n#### 33.2. 使用代码\n\n```python\nfrom model.attention.DAT import DAT\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = DAT(\n        img_size=224,\n        patch_size=4,\n        num_classes=1000,\n        expansion=4,\n        dim_stem=96,\n        dims=[96, 192, 384, 768],\n        depths=[2, 2, 6, 2],\n        stage_spec=[['L', 'S'], ['L', 'S'], ['L', 'D', 'L', 'D', 'L', 'D'], ['L', 'D']],\n        heads=[3, 6, 12, 24],\n        window_sizes=[7, 7, 7, 7],\n        groups=[-1, -1, 3, 6],\n        use_pes=[False, False, True, True],\n        dwc_pes=[False, False, False, False],\n        strides=[-1, -1, 1, 1],\n        sr_ratios=[-1, -1, -1, -1],\n        offset_range_factor=[-1, -1, 2, 2],\n        no_offs=[False, False, False, False],\n        fixed_pes=[False, False, False, False],\n        use_dwc_mlps=[False, False, False, False],\n        use_conv_patches=False,\n        drop_rate=0.0,\n        attn_drop_rate=0.0,\n        drop_path_rate=0.2,\n    )\n    output=model(input)\n    print(output[0].shape)\n    \n```\n\n### 34. CrossFormer 注意力使用方法\n\n#### 34.1. 论文\n\n[CROSSFORMER：一种基于跨尺度注意力的多功能视觉Transformer---ICLR 2022](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.00154.pdf)\n\n#### 34.2. 
使用代码\n\n```python\nfrom model.attention.Crossformer import CrossFormer\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CrossFormer(img_size=224,\n        patch_size=[4, 8, 16, 32],\n        in_chans= 3,\n        num_classes=1000,\n        embed_dim=48,\n        depths=[2, 2, 6, 2],\n        num_heads=[3, 6, 12, 24],\n        group_size=[7, 7, 7, 7],\n        mlp_ratio=4.,\n        qkv_bias=True,\n        qk_scale=None,\n        drop_rate=0.0,\n        drop_path_rate=0.1,\n        ape=False,\n        patch_norm=True,\n        use_checkpoint=False,\n        merge_size=[[2, 4], [2, 4], [2, 4]]\n    )\n    output=model(input)\n    print(output.shape)\n    \n```\n\n### 35. MOATransformer 注意力使用方法\n\n#### 35.1. 论文\n\n[将全局特征聚合到局部视觉Transformer中](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12903)\n\n#### 35.2. 使用代码\n\n```python\nfrom model.attention.MOATransformer import MOATransformer\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = MOATransformer(\n        img_size=224,\n        patch_size=4,\n        in_chans=3,\n        num_classes=1000,\n        embed_dim=96,\n        depths=[2, 2, 6],\n        num_heads=[3, 6, 12],\n        window_size=14,\n        mlp_ratio=4.,\n        qkv_bias=True,\n        qk_scale=None,\n        drop_rate=0.0,\n        drop_path_rate=0.1,\n        ape=False,\n        patch_norm=True,\n        use_checkpoint=False\n    )\n    output=model(input)\n    print(output.shape)\n    \n```\n\n### 36. CrissCrossAttention 注意力使用方法\n\n#### 36.1. 论文\n\n[CCNet：用于语义分割的十字交叉注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.11721)\n\n#### 36.2. 使用代码\n\n```python\nfrom model.attention.CrissCrossAttention import CrissCrossAttention\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(3, 64, 7, 7)\n    model = CrissCrossAttention(64)\n    outputs = model(input)\n    print(outputs.shape)  # (3, 64, 7, 7)，与输入同形\n```
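\n\nCCA 的核心思想是：每个位置只与处于同一行、同一列的位置计算注意力，使全局建模的计算量从 O((H×W)×(H×W)) 降为 O((H×W)×(H+W-1))。下面给出一个先按行、再按列做自注意力的极简草图（纯 PyTorch 示意，借助 `nn.MultiheadAttention`，并非本仓库的实现；严格来说这属于下一节 Axial 式的按轴分解，CCNet 是在单次注意力中同时覆盖行与列，此处仅帮助建立直观）：\n\n```python\nimport torch\nfrom torch import nn\n\nB, C, H, W = 3, 64, 7, 7\nattn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)\nx = torch.randn(B, C, H, W)\n\n# 行内注意力：把每一行视作长度为 W 的序列\nrows = x.permute(0, 2, 3, 1).reshape(B * H, W, C)\nrows, _ = attn(rows, rows, rows)\nx = rows.reshape(B, H, W, C)\n\n# 列内注意力：把每一列视作长度为 H 的序列\ncols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)\ncols, _ = attn(cols, cols, cols)\nx = cols.reshape(B, W, H, C).permute(0, 3, 2, 1)  # 还原为 (B, C, H, W)\nprint(x.shape)  # torch.Size([3, 64, 7, 7])\n```\n\n### 37. Axial_attention 注意力使用方法\n\n#### 37.1. 论文\n\n[多维Transformer中的轴向注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.12180)\n\n#### 37.2. 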
### 37. Axial_attention 注意力使用方法\n\n#### 37.1. 论文\n\n[多维Transformer中的轴向注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.12180)\n\n#### 37.2. 使用代码\n\n```python\nfrom model.attention.Axial_attention import AxialImageTransformer\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(3, 128, 7, 7)\n    model = AxialImageTransformer(\n        dim = 128,\n        depth = 12,\n        reversible = True\n    )\n    outputs = model(input)\n    print(outputs.shape)\n    \n```\n\n***\n\n# 骨干网络系列\n\n- PyTorch 实现的 [\"Deep Residual Learning for Image Recognition---CVPR2016 最佳论文\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1512.03385.pdf)\n\n- PyTorch 实现的 [\"Aggregated Residual Transformations for Deep Neural Networks---CVPR2017\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431v2)\n\n- PyTorch 实现的 [MobileViT: 轻量级、通用且移动端友好的视觉 Transformer---ArXiv 2020.10.05](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02907)\n\n- PyTorch 实现的 [Patches Are All You Need?---ICLR2022 (审稿中)](https:\u002F\u002Fopenreview.net\u002Fforum?id=TVHS5Y4dNvM)\n\n- PyTorch 实现的 [Shuffle Transformer: 重新思考视觉 Transformer 的空间洗牌操作---ArXiv 2021.06.07](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03650)\n\n- PyTorch 实现的 [ConTNet: 为什么不在同一模型中同时使用卷积和 Transformer？---ArXiv 2021.04.27](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.13497)\n\n- PyTorch 实现的 [具有层次化注意力的视觉 Transformer---ArXiv 2022.06.15](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03180)\n\n- PyTorch 实现的 [Co-Scale Conv-Attentional Image Transformers---ArXiv 2021.08.26](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06399)\n\n- PyTorch 实现的 [用于视觉 Transformer 的条件位置编码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10882)\n\n- PyTorch 实现的 [重新思考视觉 Transformer 的空间维度---ICCV 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16302)\n\n- PyTorch 实现的 [CrossViT: 用于图像分类的跨注意力多尺度视觉 Transformer---ICCV 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.14899)\n\n- PyTorch 实现的 [Transformer in Transformer---NeurIPS 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00112)\n\n- PyTorch 实现的 [DeepViT: 向更深层的视觉 Transformer 发展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11886)\n\n- PyTorch 实现的 [将卷积设计融入视觉 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11816)\n\n- PyTorch 实现的 [ConViT: 通过软卷积归纳偏置改进视觉 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.10697)\n\n- PyTorch 实现的 [基于注意力聚合增强卷积网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.13692)\n\n- PyTorch 实现的 [让图像 Transformer 更深——ICCV 2021 (口头报告)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.17239)\n\n- PyTorch 实现的 [数据高效训练的图像 Transformer 及通过注意力进行蒸馏---ICML 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.12877)\n\n- PyTorch 实现的 [LeViT: 以卷积网络之形实现更快推理的视觉 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01136)\n\n- PyTorch 实现的 [VOLO: 用于视觉识别的 Vision Outlooker](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13112)\n\n- PyTorch 实现的 [Container: 上下文聚合网络---NeurIPS 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01401)\n\n- PyTorch 实现的 [CMT: 卷积神经网络与视觉 Transformer 的结合---CVPR 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06263)\n\n- PyTorch 实现的 [具有可变形注意力的视觉 Transformer---CVPR 2022](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.00520)\n\n- PyTorch 实现的 [EfficientFormer: 以 MobileNet 速度运行的视觉 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01191)\n\n- PyTorch 实现的 [ConvNeXtV2: 将卷积网络与掩码自编码器协同设计并扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00808)\n\n\n### 1. ResNet 使用\n#### 1.1. 论文\n[\"Deep Residual Learning for Image Recognition---CVPR2016 最佳论文\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1512.03385.pdf)\n\n#### 1.2. 
概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_59a77a1e2933.png)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_6c10523cc884.jpg)\n\n#### 1.3. 使用代码\n```python\n\nfrom model.backbone.resnet import ResNet50,ResNet101,ResNet152\nimport torch\nif __name__ == '__main__':\n    input=torch.randn(50,3,224,224)\n    resnet50=ResNet50(1000)\n    # resnet101=ResNet101(1000)\n    # resnet152=ResNet152(1000)\n    out=resnet50(input)\n    print(out.shape)\n\n```\n\n\n### 2. ResNeXt 使用\n#### 2.1. 论文\n\n[\"Aggregated Residual Transformations for Deep Neural Networks---CVPR2017\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.05431v2)\n\n#### 2.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_931531bc607a.png)\n\n#### 2.3. 使用代码\n```python\n\nfrom model.backbone.resnext import ResNeXt50,ResNeXt101,ResNeXt152\nimport torch\n\nif __name__ == '__main__':\n    input=torch.randn(50,3,224,224)\n    resnext50=ResNeXt50(1000)\n    # resnext101=ResNeXt101(1000)\n    # resnext152=ResNeXt152(1000)\n    out=resnext50(input)\n    print(out.shape)\n\n\n```\n\n\n\n### 3. MobileViT 使用\n#### 3.1. 论文\n\n[MobileViT: 轻量级、通用且移动端友好的视觉 Transformer---ArXiv 2020.10.05](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.02907)\n\n#### 3.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_8e7e4dcf8f1f.jpg)\n\n#### 3.3. 使用代码\n```python\n\nfrom model.backbone.MobileViT import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n\n    ### mobilevit_xxs\n    mvit_xxs=mobilevit_xxs()\n    out=mvit_xxs(input)\n    print(out.shape)\n\n    ### mobilevit_xs\n    mvit_xs=mobilevit_xs()\n    out=mvit_xs(input)\n    print(out.shape)\n\n\n    ### mobilevit_s\n    mvit_s=mobilevit_s()\n    out=mvit_s(input)\n    print(out.shape)\n\n```\n\n\n\n\n\n### 4. ConvMixer 使用\n#### 4.1. 论文\n[Patches Are All You Need?---ICLR2022 (审稿中)](https:\u002F\u002Fopenreview.net\u002Fforum?id=TVHS5Y4dNvM)\n#### 4.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_12633cadd598.png)\n\n#### 4.3. 使用代码\n```python\n\nfrom model.backbone.ConvMixer import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    x=torch.randn(1,3,224,224)\n    convmixer=ConvMixer(dim=512,depth=12)\n    out=convmixer(x)\n    print(out.shape)  #[1, 1000]\n\n\n```\n\n### 5. ShuffleTransformer 使用\n#### 5.1. 论文\n[Shuffle Transformer: 重新思考视觉 Transformer 的空间洗牌操作](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.03650.pdf)\n\n#### 5.2. 使用代码\n```python\n\nfrom model.backbone.ShuffleTransformer import ShuffleTransformer\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    sft = ShuffleTransformer()\n    output=sft(input)\n    print(output.shape)\n\n\n```\n\n### 6. ConTNet 使用\n#### 6.1. 论文\n[ConTNet: 为什么不在同一模型中同时使用卷积和 Transformer？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.13497)\n\n#### 6.2. 
使用代码\n```python\n\nfrom model.backbone.ConTNet import build_model\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == \"__main__\":\n    model = build_model(use_avgdown=True, relative=True, qkv_bias=True, pre_norm=True)\n    input = torch.randn(1, 3, 224, 224)\n    out = model(input)\n    print(out.shape)\n\n\n```\n\n### 7 HATNet 使用方法\n#### 7.1. 论文\n[具有层次化注意力的视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.03180)\n\n#### 7.2. 使用代码\n```python\n\nfrom model.backbone.HATNet import HATNet\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    hat = HATNet(dims=[48, 96, 240, 384], head_dim=48, expansions=[8, 8, 4, 4],\n        grid_sizes=[8, 7, 7, 1], ds_ratios=[8, 4, 2, 1], depths=[2, 2, 6, 3])\n    output=hat(input)\n    print(output.shape)\n\n\n```\n\n### 8 CoaT 使用方法\n#### 8.1. 论文\n[协同尺度卷积-注意力图像Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.06399)\n\n#### 8.2. 使用代码\n```python\n\nfrom model.backbone.CoaT import CoaT\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CoaT(patch_size=4, embed_dims=[152, 152, 152, 152], serial_depths=[2, 2, 2, 2], parallel_depth=6, num_heads=8, mlp_ratios=[4, 4, 4, 4])\n    output=model(input)\n    print(output.shape) # torch.Size([1, 1000])\n\n```\n\n### 9 PVT 使用方法\n#### 9.1. 论文\n[PVT v2：基于金字塔视觉Transformer的改进基线](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.13797.pdf)\n\n#### 9.2. 使用代码\n```python\n\nfrom model.backbone.PVT import PyramidVisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = PyramidVisionTransformer(\n        patch_size=4, embed_dims=[64, 128, 320, 512], num_heads=[1, 2, 5, 8], mlp_ratios=[8, 8, 4, 4], qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[2, 2, 2, 2], sr_ratios=[8, 4, 2, 1])\n    output=model(input)\n    print(output.shape)\n\n```\n\n\n### 10 CPVT 使用方法\n#### 10.1. 论文\n[用于视觉Transformer的条件位置编码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.10882)\n\n#### 10.2. 使用代码\n```python\n\nfrom model.backbone.CPVT import CPVTV2\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CPVTV2(\n        patch_size=4, embed_dims=[64, 128, 320, 512], num_heads=[1, 2, 5, 8], mlp_ratios=[8, 8, 4, 4], qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6), depths=[3, 4, 6, 3], sr_ratios=[8, 4, 2, 1])\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 11 PIT 使用方法\n#### 11.1. 论文\n[重新思考视觉Transformer的空间维度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.16302)\n\n#### 11.2. 使用代码\n```python\n\nfrom model.backbone.PIT import PoolingTransformer\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = PoolingTransformer(\n        image_size=224,\n        patch_size=14,\n        stride=7,\n        base_dims=[64, 64, 64],\n        depth=[3, 6, 4],\n        heads=[4, 8, 16],\n        mlp_ratio=4\n    )\n    output=model(input)\n    print(output.shape)\n\n```\n\n
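以源码方式运行上述示例时，需要保证仓库根目录位于 Python 的模块搜索路径中，否则会出现类似 `ModuleNotFoundError: No module named 'model'` 的错误。可以参考项目 FAQ 的做法，在导入前补充搜索路径：\n\n```python\n# 若脚本不在仓库根目录下运行，先把根目录加入 sys.path（做法来自项目 FAQ）\nimport sys\nimport os\nsys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))\n```\n\n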
### 12 CrossViT 使用方法\n#### 12.1. 论文\n[CrossViT：用于图像分类的交叉注意力多尺度视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.14899)\n\n#### 12.2. 使用代码\n```python\n\nfrom model.backbone.CrossViT import VisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == \"__main__\":\n    input=torch.randn(1,3,224,224)\n    model = VisionTransformer(\n        img_size=[240, 224],\n        patch_size=[12, 16], \n        embed_dim=[192, 384], \n        depth=[[1, 4, 0], [1, 4, 0], [1, 4, 0]],\n        num_heads=[6, 6], \n        mlp_ratio=[4, 4, 1], \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n    )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 13 TnT 使用方法\n#### 13.1. 论文\n[Transformer in Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.00112)\n\n#### 13.2. 使用代码\n```python\n\nfrom model.backbone.TnT import TNT\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = TNT(\n        img_size=224, \n        patch_size=16, \n        outer_dim=384, \n        inner_dim=24, \n        depth=12,\n        outer_num_heads=6, \n        inner_num_heads=4, \n        qkv_bias=False,\n        inner_stride=4)\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 14 DViT 使用方法\n#### 14.1. 论文\n[DeepViT：迈向更深层的视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11886)\n\n#### 14.2. 使用代码\n```python\n\nfrom model.backbone.DViT import DeepVisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = DeepVisionTransformer(\n        patch_size=16, embed_dim=384, \n        depth=[False] * 16, \n        apply_transform=[False] * 0 + [True] * 32, \n        num_heads=12, \n        mlp_ratio=3, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 15 CeiT 使用方法\n#### 15.1. 论文\n[将卷积设计融入视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.11816)\n\n#### 15.2. 使用代码\n```python\n\nfrom model.backbone.CeiT import CeIT, Image2Tokens\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CeIT(\n        hybrid_backbone=Image2Tokens(),\n        patch_size=4, \n        embed_dim=192, \n        depth=12, \n        num_heads=3, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 16 ConViT 使用方法\n#### 16.1. 论文\n[ConViT：通过软卷积归纳偏置改进视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.10697)\n\n#### 16.2. 使用代码\n```python\n\nfrom model.backbone.ConViT import VisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionTransformer(\n        num_heads=16,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n
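上面多个模型都以 `functools.partial` 的形式传入 `norm_layer`：该参数期望的是“层工厂”（可调用对象）而非层实例，partial 预先绑定 eps，模型内部再按各层的维度实例化。一个最小示意：\n\n```python\n# 示意：norm_layer 接收可调用的层工厂，partial 预先绑定 eps\nfrom functools import partial\nfrom torch import nn\n\nnorm_layer = partial(nn.LayerNorm, eps=1e-6)\nln = norm_layer(384)  # 等价于 nn.LayerNorm(384, eps=1e-6)\nprint(ln)\n```\n\n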
### 17 CaiT 使用方法\n#### 17.1. 论文\n[深入研究图像Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.17239)\n\n#### 17.2. 使用代码\n```python\n\nfrom model.backbone.CaiT import CaiT\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CaiT(\n        img_size=224,\n        patch_size=16, \n        embed_dim=192, \n        depth=24, \n        num_heads=4, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        init_scale=1e-5,\n        depth_token_only=2\n        )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 18 PatchConvnet 使用方法\n#### 18.1. 论文\n[通过基于注意力的聚合增强卷积网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.13692)\n\n#### 18.2. 使用代码\n```python\n\nfrom model.backbone.PatchConvnet import PatchConvnet, ConvStem, Conv_blocks_se\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = PatchConvnet(\n        patch_size=16,\n        embed_dim=384,\n        depth=60,\n        num_heads=1,\n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6),\n        Patch_layer=ConvStem,\n        Attention_block=Conv_blocks_se,\n        depth_token_only=1,\n        mlp_ratio_clstk=3.0,\n    )\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 19 DeiT 使用方法\n#### 19.1. 论文\n[高效训练图像Transformer及通过注意力进行蒸馏](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.12877)\n\n#### 19.2. 使用代码\n```python\n\nfrom model.backbone.DeiT import DistilledVisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = DistilledVisionTransformer(\n        patch_size=16, \n        embed_dim=384, \n        depth=12, \n        num_heads=6, \n        mlp_ratio=4, \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6)\n        )\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 20 LeViT 使用方法\n#### 20.1. 论文\n[LeViT：以卷积神经网络形式实现的视觉Transformer，用于更快速的推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01136)\n\n#### 20.2. 使用代码\n```python\n\nfrom model.backbone.LeViT import *\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    for name in specification:\n        input=torch.randn(1,3,224,224)\n        model = globals()[name](fuse=True, pretrained=False)\n        model.eval()\n        output = model(input)\n        print(output.shape)\n\n```\n\n### 21 VOLO 使用方法\n#### 21.1. 论文\n[VOLO：用于视觉识别的视觉观察者](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13112)\n\n#### 21.2. 使用代码\n```python\n\nfrom model.backbone.VOLO import VOLO\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VOLO([4, 4, 8, 2],\n                 embed_dims=[192, 384, 384, 384],\n                 num_heads=[6, 12, 12, 12],\n                 mlp_ratios=[3, 3, 3, 3],\n                 downsamples=[True, False, False, False],\n                 outlook_attention=[True, False, False, False],\n                 post_layers=['ca', 'ca'],\n                 )\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n
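这些骨干网络的配置项较多，集成前可以先检查参数量是否符合预期。下面是一个通用的小工具（示意代码，可配合上文任意模型使用）：\n\n```python\n# 示意：统计任意 nn.Module 的可训练参数量，便于在不同骨干配置间对比\nfrom torch import nn\n\ndef count_params(model: nn.Module) -> int:\n    return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nif __name__ == '__main__':\n    # 这里以单个卷积层为例，可替换为上文任意骨干网络实例\n    print(count_params(nn.Conv2d(3, 64, 3)))  # 3*64*9 + 64 = 1792\n```\n\n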
### 22 Container 使用方法\n#### 22.1. 论文\n[Container：上下文聚合网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01401)\n\n#### 22.2. 使用代码\n```python\n\nfrom model.backbone.Container import VisionTransformer\nfrom functools import partial\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionTransformer(\n        img_size=[224, 56, 28, 14], \n        patch_size=[4, 2, 2, 2], \n        embed_dim=[64, 128, 320, 512], \n        depth=[3, 4, 8, 3], \n        num_heads=16, \n        mlp_ratio=[8, 8, 4, 4], \n        qkv_bias=True,\n        norm_layer=partial(nn.LayerNorm, eps=1e-6))\n    output=model(input)\n    print(output.shape)\n\n```\n\n### 23 CMT 使用方法\n#### 23.1. 论文\n[CMT：卷积神经网络与视觉Transformer的结合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06263)\n\n#### 23.2. 使用代码\n```python\n\nfrom model.backbone.CMT import CMT_Tiny\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = CMT_Tiny()\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 24 EfficientFormer 使用方法\n#### 24.1. 论文\n[EfficientFormer：以MobileNet速度运行的视觉Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.01191)\n\n#### 24.2. 使用代码\n```python\n\nfrom model.backbone.EfficientFormer import EfficientFormer, EfficientFormer_depth, EfficientFormer_width\nimport torch\nfrom torch import nn\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = EfficientFormer(\n        layers=EfficientFormer_depth['l1'],\n        embed_dims=EfficientFormer_width['l1'],\n        downsamples=[True, True, True, True],\n        vit_num=1,\n    )\n    output=model(input)\n    print(output[0].shape)\n\n```\n\n### 25 ConvNeXtV2 使用方法\n#### 25.1. 论文\n[ConvNeXtV2：与掩码自编码器协同设计并扩展卷积网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00808)\n\n#### 25.2. 使用代码\n```python\n\nfrom model.backbone.convnextv2 import convnextv2_atto\nimport torch\nfrom torch import nn\n\nif __name__ == \"__main__\":\n    model = convnextv2_atto()\n    input = torch.randn(1, 3, 224, 224)\n    out = model(input)\n    print(out.shape)\n\n```\n\n\n\n\n\n# MLP系列\n\n- Pytorch实现的[\"RepMLP：将卷积重新参数化为全连接层用于图像识别---arXiv 2021.05.05\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01883v1.pdf)\n\n- Pytorch实现的[\"MLP-Mixer：一种面向视觉的全MLP架构---arXiv 2021.05.17\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01601.pdf)\n\n- Pytorch实现的[\"ResMLP：用于图像分类的前馈网络，支持数据高效的训练---arXiv 2021.05.07\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.03404.pdf)\n\n- Pytorch实现的[\"关注MLP---arXiv 2021.05.17\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08050)\n\n\n- Pytorch实现的[\"用于图像识别的稀疏MLP：自注意力真的必要吗？---arXiv 2021.09.12\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05422)\n\n### 1. RepMLP 使用方法\n#### 1.1. 论文\n[\"RepMLP：将卷积重新参数化为全连接层用于图像识别\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01883v1.pdf)\n\n#### 1.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_1ecc96f94f06.png)\n\n#### 1.3. 
使用代码\n```python\nfrom model.mlp.repmlp import RepMLP\nimport torch\nfrom torch import nn\n\nN=4 #batch size\nC=512 #input dim\nO=1024 #output dim\nH=14 #image height\nW=14 #image width\nh=7 #patch height\nw=7 #patch width\nfc1_fc2_reduction=1 #reduction ratio\nfc3_groups=8 # groups\nrepconv_kernels=[1,3,5,7] #kernel list\nrepmlp=RepMLP(C,O,H,W,h,w,fc1_fc2_reduction,fc3_groups,repconv_kernels=repconv_kernels)\nx=torch.randn(N,C,H,W)\nrepmlp.eval()\nfor module in repmlp.modules():\n    if isinstance(module, nn.BatchNorm2d) or isinstance(module, nn.BatchNorm1d):\n        nn.init.uniform_(module.running_mean, 0, 0.1)\n        nn.init.uniform_(module.running_var, 0, 0.1)\n        nn.init.uniform_(module.weight, 0, 0.1)\n        nn.init.uniform_(module.bias, 0, 0.1)\n\n#training result\nout=repmlp(x)\n#inference result\nrepmlp.switch_to_deploy()\ndeployout = repmlp(x)\n\nprint(((deployout-out)**2).sum())\n```\n\n### 2. MLP-Mixer 使用方法\n#### 2.1. 论文\n[\"MLP-Mixer：一种面向视觉的全MLP架构\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01601.pdf)\n\n#### 2.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_b4aa183d6d50.png)\n\n#### 2.3. 使用代码\n```python\nfrom model.mlp.mlp_mixer import MlpMixer\nimport torch\nmlp_mixer=MlpMixer(num_classes=1000,num_blocks=10,patch_size=10,tokens_hidden_dim=32,channels_hidden_dim=1024,tokens_mlp_dim=16,channels_mlp_dim=1024)\ninput=torch.randn(50,3,40,40)\noutput=mlp_mixer(input)\nprint(output.shape)\n```\n\n***\n\n### 3. ResMLP 使用方法\n#### 3.1. 论文\n[\"ResMLP：用于图像分类的前馈网络，支持数据高效训练\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.03404.pdf)\n\n#### 3.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_10d74a989156.png)\n\n#### 3.3. 使用代码\n```python\nfrom model.mlp.resmlp import ResMLP\nimport torch\n\ninput=torch.randn(50,3,14,14)\nresmlp=ResMLP(dim=128,image_size=14,patch_size=7,class_num=1000)\nout=resmlp(input)\nprint(out.shape) #最后一维是类别数\n```\n\n***\n\n### 4. gMLP 使用方法\n#### 4.1. 论文\n[\"关注MLP\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.08050)\n\n#### 4.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_be4096c31838.jpg)\n\n#### 4.3. 使用代码\n```python\nfrom model.mlp.g_mlp import gMLP\nimport torch\n\nnum_tokens=10000\nbs=50\nlen_sen=49\nnum_layers=6\ninput=torch.randint(num_tokens,(bs,len_sen)) #batch size,序列长度\ngmlp = gMLP(num_tokens=num_tokens,len_sen=len_sen,dim=512,d_ff=1024)\noutput=gmlp(input)\nprint(output.shape)\n```\n\n***\n\n### 5. sMLP 使用方法\n#### 5.1. 论文\n[\"用于图像识别的稀疏MLP：自注意力真的必要吗？\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05422)\n\n#### 5.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_4f1121cdf942.jpg)\n\n#### 5.3. 使用代码\n```python\nfrom model.mlp.sMLP_block import sMLPBlock\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(50,3,224,224)\n    smlp=sMLPBlock(h=224,w=224)\n    out=smlp(input)\n    print(out.shape)\n```\n\n### 6. vip-mlp 使用方法\n#### 6.1. 论文\n[\"视觉置换器：一种可置换的类MLP架构，用于视觉识别\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12368)\n\n#### 6.2. 
使用代码\n```python\n# 注：仓库中该模块路径为 model.mlp.vip-mlp，模块名含连字符，无法用常规 import 语句导入，\n# 这里改用 importlib 按路径加载（若已将文件重命名为 vip_mlp.py，可直接 from model.mlp.vip_mlp import ...）\nimport importlib\nimport torch\n\nvip = importlib.import_module('model.mlp.vip-mlp')\nVisionPermutator = vip.VisionPermutator\nWeightedPermuteMLP = vip.WeightedPermuteMLP\n\nif __name__ == '__main__':\n    input=torch.randn(1,3,224,224)\n    model = VisionPermutator(\n        layers=[4, 3, 8, 3], \n        embed_dims=[384, 384, 384, 384], \n        patch_size=14, \n        transitions=[False, False, False, False],\n        segment_dim=[16, 16, 16, 16], \n        mlp_ratios=[3, 3, 3, 3], \n        mlp_fn=WeightedPermuteMLP\n    )\n    output=model(input)\n    print(output.shape)\n```\n\n\n# 重参数化系列\n\n- [\"RepVGG：让VGG风格的卷积神经网络再次伟大——CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03697) 的 PyTorch 实现\n\n- [\"ACNet：通过非对称卷积块强化内核骨架，打造强大的CNN——ICCV2019\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03930) 的 PyTorch 实现\n\n- [\"多样分支模块：将卷积构建为类似Inception的单元——CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13425) 的 PyTorch 实现\n\n\n***\n\n### 1. RepVGG 使用方法\n#### 1.1. 论文\n[\"RepVGG：让VGG风格的卷积神经网络再次伟大\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03697)\n\n#### 1.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_81a02a397a95.png)\n\n#### 1.3. 使用代码\n```python\n\nfrom model.rep.repvgg import RepBlock\nimport torch\n\n\ninput=torch.randn(50,512,49,49)\nrepblock=RepBlock(512,512)\nrepblock.eval()\nout=repblock(input)\nrepblock._switch_to_deploy()\nout2=repblock(input)\nprint('VGG与RepVGG之间的差异')\nprint(((out2-out)**2).sum())\n```\n\n\n\n***\n\n### 2. ACNet 使用方法\n#### 2.1. 论文\n[\"ACNet：通过非对称卷积块强化内核骨架，打造强大的CNN\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1908.03930)\n\n#### 2.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_35f66dcfbd64.png)\n\n#### 2.3. 使用代码\n```python\nfrom model.rep.acnet import ACNet\nimport torch\nfrom torch import nn\n\ninput=torch.randn(50,512,49,49)\nacnet=ACNet(512,512)\nacnet.eval()\nout=acnet(input)\nacnet._switch_to_deploy()\nout2=acnet(input)\nprint('差异：')\nprint(((out2-out)**2).sum())\n\n```\n\n\n\n***\n\n
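无论是 RepVGG 还是 ACNet，验证重参数化是否正确的通用模式都相同：先切换到 eval 模式（BN 统计量固定后等价变换才成立），再比较切换部署分支前后的输出。上面的示例打印平方误差，也可以用 torch.allclose 直接断言（示意代码）：\n\n```python\n# 示意：用 torch.allclose 验证重参数化前后输出在数值误差内一致\nfrom model.rep.repvgg import RepBlock\nimport torch\n\nif __name__ == '__main__':\n    x = torch.randn(2, 512, 14, 14)\n    block = RepBlock(512, 512)\n    block.eval()  # 必须先固定 BN 统计量，等价变换才成立\n    with torch.no_grad():\n        y_train = block(x)\n        block._switch_to_deploy()\n        y_deploy = block(x)\n    print(torch.allclose(y_train, y_deploy, atol=1e-5))  # 预期 True\n```\n\n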
### 3. 多样分支模块使用方法\n#### 3.1. 论文\n[\"多样分支模块：将卷积构建为类似Inception的单元\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.13425)\n\n#### 3.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_2b128a19cd34.png)\n\n#### 3.3. 使用代码\n##### 3.3.1 变换I\n```python\nfrom model.rep.ddb import transI_conv_bn\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n#卷积+批归一化\nconv1=nn.Conv2d(64,64,3,padding=1)\nbn1=nn.BatchNorm2d(64)\nbn1.eval()\nout1=bn1(conv1(input))\n\n#融合卷积\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transI_conv_bn(conv1,bn1)\nout2=conv_fuse(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.2 变换II\n```python\nfrom model.rep.ddb import transII_conv_branch\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#两个卷积相加\nconv1=nn.Conv2d(64,64,3,padding=1)\nconv2=nn.Conv2d(64,64,3,padding=1)\nout1=conv1(input)+conv2(input)\n\n#融合卷积\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transII_conv_branch(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.3 变换III\n```python\nfrom model.rep.ddb import transIII_conv_sequential\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#1x1卷积与3x3卷积串联\nconv1=nn.Conv2d(64,64,1,padding=0,bias=False)\nconv2=nn.Conv2d(64,64,3,padding=1,bias=False)\nout1=conv2(conv1(input))\n\n\n#融合卷积\nconv_fuse=nn.Conv2d(64,64,3,padding=1,bias=False)\nconv_fuse.weight.data=transIII_conv_sequential(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.4 变换IV\n```python\nfrom model.rep.ddb import transIV_conv_concat\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#两个卷积在通道维拼接\nconv1=nn.Conv2d(64,32,3,padding=1)\nconv2=nn.Conv2d(64,32,3,padding=1)\nout1=torch.cat([conv1(input),conv2(input)],dim=1)\n\n#融合卷积\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transIV_conv_concat(conv1,conv2)\nout2=conv_fuse(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n##### 3.3.5 变换V\n```python\nfrom model.rep.ddb import transV_avg\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#平均池化\navg=nn.AvgPool2d(kernel_size=3,stride=1)\nout1=avg(input)\n\n#等价卷积\nconv=transV_avg(64,3)\nout2=conv(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n\n##### 3.3.6 变换VI\n```python\nfrom model.rep.ddb import transVI_conv_scale\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,64,7,7)\n\n#1x1、1x3、3x1三个卷积相加\nconv1x1=nn.Conv2d(64,64,1)\nconv1x3=nn.Conv2d(64,64,(1,3),padding=(0,1))\nconv3x1=nn.Conv2d(64,64,(3,1),padding=(1,0))\nout1=conv1x1(input)+conv1x3(input)+conv3x1(input)\n\n#融合卷积\nconv_fuse=nn.Conv2d(64,64,3,padding=1)\nconv_fuse.weight.data,conv_fuse.bias.data=transVI_conv_scale(conv1x1,conv1x3,conv3x1)\nout2=conv_fuse(input)\n\nprint(\"差异:\",((out2-out1)**2).sum().item())\n```\n\n
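作为补充，变换 I 的本质是把 BN 折叠进前面的卷积：W' = W·γ/√(σ²+ε)，b' = β − μ·γ/√(σ²+ε)。下面是一段不依赖本库的手工推导示意（并非 transI_conv_bn 的源码）：\n\n```python\n# 示意：手工把 BatchNorm 折叠进卷积核，验证与逐层计算等价\nimport torch\nfrom torch import nn\n\nconv = nn.Conv2d(64, 64, 3, padding=1, bias=False)\nbn = nn.BatchNorm2d(64).eval()  # 折叠基于 running 统计量，需在 eval 模式下进行\n\nstd = (bn.running_var + bn.eps).sqrt()\nfused = nn.Conv2d(64, 64, 3, padding=1)\nfused.weight.data = conv.weight.data * (bn.weight.data / std).reshape(-1, 1, 1, 1)\nfused.bias.data = bn.bias.data - bn.running_mean * bn.weight.data / std\n\nx = torch.randn(1, 64, 7, 7)\nprint((fused(x) - bn(conv(x))).abs().max())  # 预期接近 0\n```\n\n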
# 卷积系列\n\n- PyTorch 实现的 [\"MobileNets: 面向移动视觉应用的高效卷积神经网络---CVPR2017\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861)\n\n- PyTorch 实现的 [\"EfficientNet: 重新思考卷积神经网络的模型缩放---PMLR2019\"](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html)\n\n- PyTorch 实现的 [\"Involution: 针对视觉识别任务反转卷积的固有特性---CVPR2021\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06255)\n\n- PyTorch 实现的 [\"动态卷积：卷积核上的注意力机制---CVPR2020 口头报告\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03458)\n\n- PyTorch 实现的 [\"CondConv: 用于高效推理的条件参数化卷积---NeurIPS2019\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04971)\n\n***\n\n### 1. 深度可分离卷积的使用\n#### 1.1. 论文\n[\"MobileNets: 面向移动视觉应用的高效卷积神经网络\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04861)\n\n#### 1.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_e9b292b9f865.png)\n\n#### 1.3. 使用代码\n```python\nfrom model.conv.DepthwiseSeparableConvolution import DepthwiseSeparableConvolution\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,3,224,224)\ndsconv=DepthwiseSeparableConvolution(3,64)\nout=dsconv(input)\nprint(out.shape)\n```\n\n***\n\n\n### 2. MBConv 的使用\n#### 2.1. 论文\n[\"EfficientNet: 重新思考卷积神经网络的模型缩放\"](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html)\n\n#### 2.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_9fe195963337.jpg)\n\n#### 2.3. 使用代码\n```python\nfrom model.conv.MBConv import MBConvBlock\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,3,224,224)\nmbconv=MBConvBlock(ksize=3,input_filters=3,output_filters=512,image_size=224)\nout=mbconv(input)\nprint(out.shape)\n```\n\n***\n\n\n### 3. Involution 的使用\n#### 3.1. 论文\n[\"Involution: 针对视觉识别任务反转卷积的固有特性\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06255)\n\n#### 3.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_af9656a72a2d.png)\n\n#### 3.3. 使用代码\n```python\nfrom model.conv.Involution import Involution\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ninput=torch.randn(1,4,64,64)\ninvolution=Involution(kernel_size=3,in_channel=4,stride=2)\nout=involution(input)\nprint(out.shape)\n```\n\n***\n\n\n### 4. DynamicConv 的使用\n#### 4.1. 论文\n[\"动态卷积：卷积核上的注意力机制\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03458)\n\n#### 4.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_54c450055ecd.png)\n\n#### 4.3. 使用代码\n```python\nfrom model.conv.DynamicConv import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(2,32,64,64)\n    m=DynamicConv(in_planes=32,out_planes=64,kernel_size=3,stride=1,padding=1,bias=False)\n    out=m(input)\n    print(out.shape) # [2, 64, 64, 64]\n\n```\n\n***\n\n\n### 5. CondConv 的使用\n#### 5.1. 论文\n[\"CondConv: 用于高效推理的条件参数化卷积\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04971)\n\n#### 5.2. 概述\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_readme_56f0d9379400.png)\n\n#### 5.3. 
使用代码\n```python\nfrom model.conv.CondConv import *\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nif __name__ == '__main__':\n    input=torch.randn(2,32,64,64)\n    m=CondConv(in_planes=32,out_planes=64,kernel_size=3,stride=1,padding=1,bias=False)\n    out=m(input)\n    print(out.shape)\n```\n\n\n\n## 其他项目推荐\n\n-------\n\n🔥🔥🔥 重磅！！！作为项目补充，更多论文层面的解析，可以关注新开源的项目 **[FightingCV-Paper-Reading](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FFightingCV-Paper-Reading)** ，里面汇集和整理了各大顶会顶刊的论文解析\n\n\n\n🔥🔥🔥 重磅！！！最近为大家整理了网上的各种AI相关的视频教程和必读论文 **[FightingCV-Course](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FFightingCV-Course)**\n\n\n🔥🔥🔥 重磅！！！最近全新开源了一个 **[YOLOAir](https:\u002F\u002Fgithub.com\u002Fiscyy\u002Fyoloair)** 目标检测代码库，里面集成了多种YOLO模型，包括YOLOv5, YOLOv7, YOLOR, YOLOX, YOLOv4, YOLOv3以及其他YOLO模型，还包括多种现有Attention机制。\n\n\n🔥🔥🔥 **ECCV2022论文汇总：[ECCV2022-Paper-List](https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FECCV2022-Paper-List\u002Fblob\u002Fmaster\u002FREADME.md)**","# External-Attention-pytorch 快速上手指南\n\nExternal-Attention-pytorch 是一个基于 PyTorch 的开源代码库，集成了多种主流的注意力机制（Attention)、骨干网络（Backbone）、MLP 架构及重参数化模块。本项目旨在提供语义完整、易于集成的“乐高式”组件，帮助开发者快速复现论文思想或构建新模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS, Windows\n*   **Python 版本**：>= 3.0\n*   **深度学习框架**：PyTorch >= 1.4\n*   **依赖库**：`torch`, `torchvision` (通常随 PyTorch 自动安装)\n\n> **提示**：国内用户建议使用清华源或阿里源加速 Python 包的安装。\n\n## 安装步骤\n\n您可以通过以下两种方式之一进行安装：\n\n### 方式一：通过 pip 安装（推荐）\n\n直接安装预发布的 Python 包，适合快速调用内置模块。\n\n```shell\npip install fightingcv-attention\n```\n\n*国内加速安装：*\n```shell\npip install fightingcv-attention -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 方式二：通过 Git 克隆源码\n\n适合需要查看源码、修改实现或使用最新未发布功能的开发者。\n\n```shell\ngit clone https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch.git\ncd External-Attention-pytorch\n```\n\n## 基本使用\n\n本库支持两种导入方式，请根据您的安装方式选择对应的代码。以下以 `MobileViTv2Attention` 模块为例：\n\n### 场景 A：使用 pip 安装包\n\n如果您通过 `pip install` 安装，直接从 `fightingcv_attention` 包导入。\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n# 从 fightingcv_attention 包导入\nfrom fightingcv_attention.attention.MobileViTv2Attention import *\n\nif __name__ == '__main__':\n    # 构造示例输入: [Batch_Size, Sequence_Length, Feature_Dim]\n    input = torch.randn(50, 49, 512)\n    \n    # 初始化注意力模块\n    sa = MobileViTv2Attention(d_model=512)\n    \n    # 前向传播\n    output = sa(input)\n    \n    print(output.shape)\n```\n\n### 场景 B：使用 Git 源码\n\n如果您克隆了仓库，需将导入路径中的包名替换为本地目录名 `model`。\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n# 从本地 model 目录导入\nfrom model.attention.MobileViTv2Attention import *\n\nif __name__ == '__main__':\n    # 构造示例输入\n    input = torch.randn(50, 49, 512)\n    \n    # 初始化注意力模块\n    sa = MobileViTv2Attention(d_model=512)\n    \n    # 前向传播\n    output = sa(input)\n    \n    print(output.shape)\n```\n\n### 集成到您的模型中\n\n只需将上述初始化代码嵌入到您现有的 `nn.Module` 子类中即可。该库涵盖了包括 SE, CBAM, ECA, External Attention 等在内的 30+ 种注意力机制，以及 ResNet, MobileViT, ConvNeXt 等多种骨干网络，使用方法类似，仅需更改导入路径和类名。","某计算机视觉算法工程师正在复现一篇最新的图像分割论文，试图将其中提出的新型注意力机制集成到现有的 PyTorch 训练框架中以提升模型精度。\n\n### 没有 External-Attention-pytorch 时\n- **代码定位困难**：论文核心思想虽简单，但作者开源的源码往往与特定的检测或分割框架深度耦合，导致难以从冗余的工程代码中剥离出仅含十几行的核心注意力模块。\n- **重复造轮子耗时**：开发者需要手动重写 CBAM、SE 或 External Attention 
等常见模块，不仅耗费大量时间调试维度匹配问题，还容易引入难以察觉的 Bug。\n- **理解门槛高**：对于刚入门的研究者，缺乏标准化、语义清晰的独立组件，导致在阅读论文和对照代码时产生认知断层，难以快速验证想法。\n- **实验迭代缓慢**：每次尝试不同的注意力机制（如从 SE 切换到 SK Attention）都需要重新查找资料并编写新类，严重拖慢了消融实验的进度。\n\n### 使用 External-Attention-pytorch 后\n- **即插即用高效集成**：通过 `pip install fightingcv-attention` 即可直接导入如 `MobileViTv2Attention` 等模块，无需关心底层实现细节，像搭乐高一样轻松嵌入现有网络。\n- **统一接口降低出错率**：库内所有注意力机制、MLP 及卷积模块均提供标准化的 PyTorch 接口，消除了手动重写时的维度对齐烦恼，确保代码健壮性。\n- **加速论文复现与理解**：提供了纯净的核心代码实现，帮助开发者直观对照论文公式与代码逻辑，真正实现了“让世界上没有难读的论文”。\n- **快速开展消融实验**：只需修改一行导入代码即可在 Self-Attention、ECA、DANet 等多种机制间自由切换，极大提升了模型优化和对比实验的效率。\n\nExternal-Attention-pytorch 通过将复杂的论文核心代码转化为标准化的“乐高组件”，让科研人员从繁琐的代码工程中解放出来，专注于算法创新本身。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxmu-xiaoma666_External-Attention-pytorch_44fd659e.png","xmu-xiaoma666","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fxmu-xiaoma666_369e5a17.jpg","Ph.D student of MAC Lab, Xiamen University. Intern of MinD (Machine IntelligeNce of Damo) Lab, @Alibaba.","Xiamen University","Xiamen, China",null,"https:\u002F\u002Fxmu-xiaoma666.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666",[82],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,12171,1953,"2026-04-09T14:47:23","MIT",1,"未说明",{"notes":93,"python":94,"dependencies":95},"该工具是一个包含多种注意力机制、骨干网络等模块的代码库，可通过 pip 直接安装或通过 git 克隆使用。README 中未明确指定操作系统、GPU 型号、显存大小及内存需求，仅标明了最低 Python 版本为 3.0，最低 PyTorch 版本为 1.4。由于是基础模块库，具体硬件需求取决于用户将其集成到的最终模型任务中。",">=3.0",[96,97],"torch>=1.4","fightingcv-attention",[14],[100,101,102,103,104,105,106,107],"attention","pytorch","paper","cbam","squeeze","excitation-networks","linear-layers","visual-tasks","2026-03-27T02:49:30.150509","2026-04-10T15:55:51.469844",[111,116,120,125,130,135,140,145],{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},27847,"External Attention 模块的输入维度格式是什么？d_model 和 S 参数分别代表什么？","输入特征通常为 (B, C, H, W)，在喂入 ExternalAttention 之前需要调整为 (B, H*W, C) 的格式。其中 d_model 指的是输入特征的通道数 C，S 是自定义的超参数。模块内部操作完成后，维度通常会再变换回来以进行残差连接等操作。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F1",{"id":117,"question_zh":118,"answer_zh":119,"source_url":115},27848,"External Attention 模块应该添加在网络的哪个位置效果最好？","根据实验经验，该模块通常加在 Backbone（主干网络）的最后一层比较合适。例如在 ResNet 中，就是加在最后一个残差块之后。对于分割任务（如 Mask R-CNN），一般建议加在所有 Head（检测和分割头）之前的 Backbone 末端。",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},27849,"如何在论文中正确引用本项目？","除了引用原始论文作者的工作外，您可以在论文的脚注或参考文献部分附上本项目的 GitHub 链接。维护者建议在参考文献中以类似图片链接的形式展示项目地址，以感谢代码对研究工作的支持。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F31",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},27850,"如何安装本项目？是否支持 pip 安装？","本项目已支持 pip 安装。您可以直接运行命令 `pip install dlutils_add` 来安装。如果不想使用 pip，也可以通过 `git clone` 克隆全部代码到本地使用。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F3",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},27851,"运行代码时遇到 'ModuleNotFoundError: No module named attention' 错误如何解决？","这是一个路径导入问题。解决方法是在导入语句 `from attention.SelfAttention import ...` 之前，添加以下代码以将当前目录的上上级目录加入系统路径：\n`import sys`\n`import os`\n`sys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))`","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F30",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},27852,"PyTorch 中的 nn.Conv2d 或 nn.Linear 是否需要注册到 nn.ModuleList 或 nn.Sequential 中才能参与训练？","不需要。nn.ModuleList 和 nn.Sequential 
的作用是把子模块注册到父模块并串联成整体结构；直接以属性形式（如 self.conv = nn.Conv2d(...)）赋值的层同样会被自动注册，因此不是必须使用这两个容器。但需要注意：存放在普通 Python list 中的层不会被注册，其参数不会出现在 model.parameters() 中，优化器也就不会更新它们；而已注册的层也只有在 forward 中被实际调用，其参数才会得到梯度并参与训练。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F20"},{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},27853,"PyTorch 的 nn.Linear 层可以接受 4 维输入吗？","可以。nn.Linear 层会对输入张量的最后一个维度进行投影（Projection），而保持其他维度不变。因此，无论输入是 2 维、3 维还是 4 维，只要最后一个维度匹配线性层的输入特征数，都可以正常工作。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F11",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},27854,"MobileViT 实现代码中使用的激活函数与原文不一致怎么办？","如果您发现代码中使用了 ReLU 而原文描述为 Swish，这通常是实现时的疏忽。维护者在收到反馈后已确认并修正了该问题，将激活函数改回了原文指定的 Swish。如果您使用的是旧版本代码，请拉取最新代码或手动修改激活函数。","https:\u002F\u002Fgithub.com\u002Fxmu-xiaoma666\u002FExternal-Attention-pytorch\u002Fissues\u002F36",[]]