[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-microsoft--Swin-Transformer":3,"tool-microsoft--Swin-Transformer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":102,"env_deps":104,"category_tags":113,"github_topics":114,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":154},3354,"microsoft\u002FSwin-Transformer","Swin-Transformer","This is an official implementation for \"Swin Transformer: Hierarchical Vision Transformer using Shifted Windows\".","Swin-Transformer 是一款基于“移位窗口”机制的层级化视觉 Transformer 模型，旨在为计算机视觉任务提供强大的骨干网络。它有效解决了传统 Vision Transformer 在处理高分辨率图像时计算量过大、难以捕捉多尺度特征以及缺乏层级结构等痛点，成功将 Transformer 架构的优势扩展至目标检测、实例分割、语义分割及视频动作识别等密集预测任务中。\n\n该工具的核心技术亮点在于引入了移位窗口策略，使模型能在局部窗口内高效计算自注意力，同时通过窗口间的移动建立全局联系；配合层级化设计，使其能像卷积神经网络一样生成多尺度特征图。此外，Swin-Transformer 系列持续演进，不仅推出了支持更大容量和更稳定训练的 Swin V2 版本，还集成了掩码图像建模（SimMIM）、特征蒸馏及混合专家系统（MoE）等先进预训练与优化技术，并在多个国际基准测试中刷新了性能纪录。\n\nSwin-Transformer 非常适合人工智能研究人员、算法工程师及深度学习开发者使用。无论是希望复现前沿论文成果、探索大规模预训练模型特性，还是需要将高性能视觉模型落地到实际工业场景中","Swin-Transformer 是一款基于“移位窗口”机制的层级化视觉 Transformer 模型，旨在为计算机视觉任务提供强大的骨干网络。它有效解决了传统 Vision Transformer 在处理高分辨率图像时计算量过大、难以捕捉多尺度特征以及缺乏层级结构等痛点，成功将 Transformer 架构的优势扩展至目标检测、实例分割、语义分割及视频动作识别等密集预测任务中。\n\n该工具的核心技术亮点在于引入了移位窗口策略，使模型能在局部窗口内高效计算自注意力，同时通过窗口间的移动建立全局联系；配合层级化设计，使其能像卷积神经网络一样生成多尺度特征图。此外，Swin-Transformer 系列持续演进，不仅推出了支持更大容量和更稳定训练的 Swin V2 版本，还集成了掩码图像建模（SimMIM）、特征蒸馏及混合专家系统（MoE）等先进预训练与优化技术，并在多个国际基准测试中刷新了性能纪录。\n\nSwin-Transformer 非常适合人工智能研究人员、算法工程师及深度学习开发者使用。无论是希望复现前沿论文成果、探索大规模预训练模型特性，还是需要将高性能视觉模型落地到实际工业场景中，它都提供了完善的代码实现与丰富的预训练权重支持，是构建下一代视觉系统的理想选择。","# Swin 
# Swin Transformer

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/swin-transformer-v2-scaling-up-capacity-and/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=swin-transformer-v2-scaling-up-capacity-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/swin-transformer-v2-scaling-up-capacity-and/instance-segmentation-on-coco)](https://paperswithcode.com/sota/instance-segmentation-on-coco?p=swin-transformer-v2-scaling-up-capacity-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/swin-transformer-v2-scaling-up-capacity-and/semantic-segmentation-on-ade20k)](https://paperswithcode.com/sota/semantic-segmentation-on-ade20k?p=swin-transformer-v2-scaling-up-capacity-and)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/swin-transformer-v2-scaling-up-capacity-and/action-classification-on-kinetics-400)](https://paperswithcode.com/sota/action-classification-on-kinetics-400?p=swin-transformer-v2-scaling-up-capacity-and)

This repo is the official implementation of ["Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"](https://arxiv.org/pdf/2103.14030.pdf) as well as its follow-ups. It currently includes code and models for the following tasks:

> **Image Classification**: Included in this repo. See [get_started.md](get_started.md) for a quick start.

> **Object Detection and Instance Segmentation**: See [Swin Transformer for Object Detection](https://github.com/SwinTransformer/Swin-Transformer-Object-Detection).

> **Semantic Segmentation**: See [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation).

> **Video Action Recognition**: See [Video Swin Transformer](https://github.com/SwinTransformer/Video-Swin-Transformer).

> **Semi-Supervised Object Detection**: See [Soft Teacher](https://github.com/microsoft/SoftTeacher).

> **SSL: Contrastive Learning**: See [Transformer-SSL](https://github.com/SwinTransformer/Transformer-SSL).

> **SSL: Masked Image Modeling**: See [get_started.md#simmim-support](https://github.com/microsoft/Swin-Transformer/blob/main/get_started.md#simmim-support).

> **Mixture-of-Experts**: See [get_started](get_started.md#mixture-of-experts-support) for more instructions.

> **Feature-Distillation**: See [Feature-Distillation](https://github.com/SwinTransformer/Feature-Distillation).

## Updates

***12/29/2022***

1. **Nvidia**'s [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md) now supports Swin Transformer V2 inference, bringing significant speed improvements on `T4 and A100 GPUs`.

***11/30/2022***

1. Models and code for **Feature Distillation** are released. Please refer to [Feature-Distillation](https://github.com/SwinTransformer/Feature-Distillation) for details and for the checkpoints (FD-EsViT-Swin-B, FD-DeiT-ViT-B, FD-DINO-ViT-B, FD-CLIP-ViT-B, FD-CLIP-ViT-L). A generic sketch of the idea follows below.
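The exact recipe lives in the linked repository; purely as an editorial sketch of the general idea (not the FD paper's actual objective), feature distillation trains a student network to match the features of a frozen pre-trained teacher:

```python
# Illustrative only: a generic feature-distillation loss, not the exact
# objective used by the Feature-Distillation paper linked above.
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats):
    """Match student features to frozen teacher features.

    Both tensors are assumed to be (B, N, C); the teacher output is
    detached so gradients flow only through the student.
    """
    # Normalizing per token makes the loss focus on feature direction,
    # a common choice in feature-based distillation.
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats.detach(), dim=-1)
    return F.smooth_l1_loss(s, t)
```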
***09/24/2022***

1. Merged [SimMIM](https://github.com/microsoft/SimMIM), a **Masked Image Modeling** based pre-training approach applicable to Swin and SwinV2 (and also to ViT and ResNet). Please refer to [get started with SimMIM](get_started.md#simmim-support) to play with SimMIM pre-training; a sketch of the objective appears after this entry.

2. Released a series of Swin and SwinV2 models pre-trained with the SimMIM approach (see [MODELHUB for SimMIM](MODELHUB.md#simmim-pretrained-swin-v2-models)), with model sizes ranging from SwinV2-Small-50M to SwinV2-giant-1B, data sizes from ImageNet-1K-10% to ImageNet-22K, and iteration counts from 125k to 500k. You may leverage these models to study the properties of MIM methods. Please see the [data scaling](https://arxiv.org/abs/2206.04664) paper for more details.
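As a minimal editorial sketch of the SimMIM idea — mask a random subset of patches, then regress the raw pixels of the masked patches with an L1 loss — using hypothetical `encoder`/`pixel_head` stand-ins (see the SimMIM repo for the real code):

```python
# Hypothetical encoder/pixel_head; only the masking + L1-on-masked-patches
# logic reflects the SimMIM objective described above.
import torch
import torch.nn.functional as F

def simmim_step(encoder, pixel_head, images, patch_size=4, mask_ratio=0.6):
    B, C, H, W = images.shape
    num_patches = (H // patch_size) * (W // patch_size)
    # Boolean patch mask, True = masked.
    mask = torch.rand(B, num_patches, device=images.device) < mask_ratio

    latent = encoder(images, mask)   # encoder substitutes mask tokens internally
    pred = pixel_head(latent)        # (B, num_patches, patch_size**2 * C)

    # Ground-truth pixels per patch, laid out to match the prediction.
    target = F.unfold(images, kernel_size=patch_size, stride=patch_size)
    target = target.transpose(1, 2)  # (B, num_patches, patch_size**2 * C)

    # L1 reconstruction loss on the masked patches only.
    return (pred - target).abs().mean(dim=-1)[mask].mean()
```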
***07/09/2022***

`News`:

1. SwinV2-G achieves `61.4 mIoU` on ADE20K semantic segmentation (+1.5 mIoU over the previous SwinV2-G model) using an additional [feature distillation (FD)](https://github.com/SwinTransformer/Feature-Distillation) approach, **setting a new record** on this benchmark. FD is an approach that can generally improve the fine-tuning performance of various pre-trained models, including DeiT, DINO, and CLIP. In particular, it improves CLIP pre-trained ViT-L by +1.6% to reach `89.0%` on ImageNet-1K image classification, **the most accurate ViT-L model** to date.
2. Merged a PR from **Nvidia** that links to a faster Swin Transformer inference implementation with significant speed improvements on `T4 and A100 GPUs`.
3. Merged a PR from **Nvidia** that enables an option to use `pure FP16 (Apex O2)` in training, while almost maintaining accuracy.

***06/03/2022***

1. Added **Swin-MoE**, the Mixture-of-Experts variant of Swin Transformer implemented using [Tutel](https://github.com/microsoft/tutel) (an optimized Mixture-of-Experts implementation). **Swin-MoE** is introduced in the [Tutel](https://arxiv.org/abs/2206.03382) paper.

***05/12/2022***

1. Pretrained models of [Swin Transformer V2](https://arxiv.org/abs/2111.09883) on ImageNet-1K and ImageNet-22K are released.
2. ImageNet-22K pretrained models for Swin-V1-Tiny and Swin-V2-Small are released.

***03/02/2022***

1. Swin Transformer V2 and SimMIM were accepted by CVPR 2022. [SimMIM](https://github.com/microsoft/SimMIM) is a self-supervised pre-training approach based on masked image modeling, a key technique for training the 3-billion-parameter Swin V2 model with `40x less labelled data` than previous billion-scale models based on JFT-3B.

***02/09/2022***

1. Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/Swin-Transformer)

***10/12/2021***

1. Swin Transformer received the ICCV 2021 Best Paper Award (Marr Prize).

***08/09/2021***

1. [Soft Teacher](https://arxiv.org/pdf/2106.09018v2.pdf) will appear at ICCV 2021. The code will be released at the [GitHub repo](https://github.com/microsoft/SoftTeacher). `Soft Teacher` is an end-to-end semi-supervised object detection method, achieving a new record on COCO test-dev: `61.3 box AP` and `53.0 mask AP`.

***07/03/2021***

1. Added **Swin MLP**, an adaptation of `Swin Transformer` that replaces all multi-head self-attention (MHSA) blocks with MLP layers (more precisely, group linear layers). The shifted window configuration can also significantly improve the performance of vanilla MLP architectures.

***06/25/2021***

1. [Video Swin Transformer](https://arxiv.org/abs/2106.13230) is released at [Video-Swin-Transformer](https://github.com/SwinTransformer/Video-Swin-Transformer). `Video Swin Transformer` achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (`84.9` top-1 accuracy on Kinetics-400 and `86.1` top-1 accuracy on Kinetics-600 with `~20x` less pre-training data and `~3x` smaller model size) and temporal modeling (`69.6` top-1 accuracy on Something-Something v2).

***05/12/2021***

1. Used as a backbone for `Self-Supervised Learning`: [Transformer-SSL](https://github.com/SwinTransformer/Transformer-SSL)

Using Swin Transformer as the backbone for self-supervised learning lets us evaluate the transfer performance of the learnt representations on downstream tasks, something missing in previous works that used ViT/DeiT, architectures not yet well tamed for downstream tasks.

***04/12/2021***

Initial commits:

1. Pretrained models on ImageNet-1K ([Swin-T-IN1K](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth), [Swin-S-IN1K](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth), [Swin-B-IN1K](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth)) and ImageNet-22K ([Swin-B-IN22K](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth), [Swin-L-IN22K](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22k.pth)) are provided.
2. The supported code and models for ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
3. The CUDA kernel implementation for the [local relation layer](https://arxiv.org/pdf/1904.11491.pdf) is provided in the [LR-Net](https://github.com/microsoft/Swin-Transformer/tree/LR-Net) branch.

## Introduction

**Swin Transformer** (the name `Swin` stands for **S**hifted **win**dow) was initially described in [arxiv](https://arxiv.org/abs/2103.14030) and capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. The two core operations are sketched below.
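A condensed sketch of those two operations — non-overlapping window partitioning and the cyclic shift — in the spirit of this repo's `window_partition` (simplified, not the exact code):

```python
import torch

def window_partition(x, window_size):
    """(B, H, W, C) -> (num_windows * B, window_size * window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)

x = torch.randn(2, 56, 56, 96)  # e.g. a Swin-T stage-1 feature map
win = 7

# Regular block: self-attention runs independently inside each 7x7 window.
windows = window_partition(x, win)  # (2 * 64, 49, 96)

# Shifted block: roll the feature map by half a window before partitioning,
# so the new windows straddle the previous window borders and connect them.
shift = win // 2
shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
shifted_windows = window_partition(shifted, win)  # same shape, shifted content
```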
Swin Transformer achieves strong performance on COCO object detection (`58.7 box AP` and `51.1 mask AP` on test-dev) and ADE20K semantic segmentation (`53.5 mIoU` on val), surpassing previous models by a large margin.

![teaser](https://oss.gittoolsai.com/images/microsoft_Swin-Transformer_readme_b570a2563714.png)

## Main Results on ImageNet with Pretrained Models

**ImageNet-1K and ImageNet-22K Pretrained Swin-V1 Models**

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth)/[baidu](https://pan.baidu.com/s/156nWJy4Q28rDlrX-rRbI3w)/[config](configs/swin/swin_tiny_patch4_window7_224.yaml)/[log](https://github.com/SwinTransformer/storage/files/7745562/log_swin_tiny_patch4_window7_224.txt) |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth)/[baidu](https://pan.baidu.com/s/1KFjpj3Efey3LmtE1QqPeQg)/[config](configs/swin/swin_small_patch4_window7_224.yaml)/[log](https://github.com/SwinTransformer/storage/files/7745563/log_swin_small_patch4_window7_224.txt) |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth)/[baidu](https://pan.baidu.com/s/16bqCTEc70nC_isSsgBSaqQ)/[config](configs/swin/swin_base_patch4_window7_224.yaml)/[log](https://github.com/SwinTransformer/storage/files/7745564/log_swin_base_patch4_window7_224.txt) |
| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth)/[baidu](https://pan.baidu.com/s/1xT1cu740-ejW7htUdVLnmw)/[config](configs/swin/swin_base_patch4_window12_384_finetune.yaml) |
| Swin-T | ImageNet-22K | 224x224 | 80.9 | 96.0 | 28M | 4.5G | 755 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22k.pth)/[baidu](https://pan.baidu.com/s/1vct0VYwwQQ8PYkBjwSSBZQ?pwd=swin)/[config](configs/swin/swin_tiny_patch4_window7_224_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22kto1k_finetune.pth)/[baidu](https://pan.baidu.com/s/1K0OO-nGZDPkR8fm_r83e8Q?pwd=swin)/[config](configs/swin/swin_tiny_patch4_window7_224_22kto1k_finetune.yaml) |
| Swin-S | ImageNet-22K | 224x224 | 83.2 | 97.0 | 50M | 8.7G | 437 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22k.pth)/[baidu](https://pan.baidu.com/s/11NC1xdT5BAGBgazdTme5Sg?pwd=swin)/[config](configs/swin/swin_small_patch4_window7_224_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22kto1k_finetune.pth)/[baidu](https://pan.baidu.com/s/10RFVfjQJhwPfeHrmxQUaLw?pwd=swin)/[config](configs/swin/swin_small_patch4_window7_224_22kto1k_finetune.yaml) |
| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth)/[baidu](https://pan.baidu.com/s/1y1Ec3UlrKSI8IMtEs-oBXA)/[config](configs/swin/swin_base_patch4_window7_224_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth)/[baidu](https://pan.baidu.com/s/1n_wNkcbRxVXit8r_KrfAVg)/[config](configs/swin/swin_base_patch4_window7_224_22kto1k_finetune.yaml) |
| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth)/[baidu](https://pan.baidu.com/s/1vwJxnJcVqcLZAw9HaqiR6g) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22kto1k.pth)/[baidu](https://pan.baidu.com/s/1caKTSdoLJYoi4WBcnmWuWg)/[config](configs/swin/swin_base_patch4_window12_384_22kto1k_finetune.yaml) |
| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22k.pth)/[baidu](https://pan.baidu.com/s/1pws3rOTFuOebBYP3h6Kx8w)/[config](configs/swin/swin_large_patch4_window7_224_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22kto1k.pth)/[baidu](https://pan.baidu.com/s/1NkQApMWUhxBGjk1ne6VqBQ)/[config](configs/swin/swin_large_patch4_window7_224_22kto1k_finetune.yaml) |
| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth)/[baidu](https://pan.baidu.com/s/1sl7o_bJA143OD7UqSLAMoA) | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22kto1k.pth)/[baidu](https://pan.baidu.com/s/1X0FLHQyPOC6Kmv2CmgxJvA)/[config](configs/swin/swin_large_patch4_window12_384_22kto1k_finetune.yaml) |
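These checkpoints are consumed by this repo's training and evaluation scripts (see [get_started.md](get_started.md)); a quick way to sanity-check the reported accuracies is the `timm` port of the same weights — note the model name below is timm's, not a file from this repo:

```python
import timm
import torch

# timm's port of the Swin-T ImageNet-1K checkpoint from the table above.
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000]) -- ImageNet-1K classes
```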
**ImageNet-1K and ImageNet-22K Pretrained Swin-V2 Models**

| name | pretrain | resolution | window | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| SwinV2-T | ImageNet-1K | 256x256 | 8x8 | 81.8 | 95.9 | 28M | 5.9G | 572 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_tiny_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/1RzLkAH_5OtfRCJe6Vlg6rg?pwd=swin)/[config](configs/swinv2/swinv2_tiny_patch4_window8_256.yaml) |
| SwinV2-S | ImageNet-1K | 256x256 | 8x8 | 83.7 | 96.6 | 50M | 11.5G | 327 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_small_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/195PdA41szEduW3jEtRSa4Q?pwd=swin)/[config](configs/swinv2/swinv2_small_patch4_window8_256.yaml) |
| SwinV2-B | ImageNet-1K | 256x256 | 8x8 | 84.2 | 96.9 | 88M | 20.3G | 217 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/18AfMSz3dPyzIvP1dKuERvQ?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window8_256.yaml) |
| SwinV2-T | ImageNet-1K | 256x256 | 16x16 | 82.8 | 96.2 | 28M | 6.6G | 437 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_tiny_patch4_window16_256.pth)/[baidu](https://pan.baidu.com/s/1dyK3cK9Xipmv6RnTtrPocw?pwd=swin)/[config](configs/swinv2/swinv2_tiny_patch4_window16_256.yaml) |
| SwinV2-S | ImageNet-1K | 256x256 | 16x16 | 84.1 | 96.8 | 50M | 12.6G | 257 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_small_patch4_window16_256.pth)/[baidu](https://pan.baidu.com/s/1ZIPiSfWNKTPp821Ka-Mifw?pwd=swin)/[config](configs/swinv2/swinv2_small_patch4_window16_256.yaml) |
| SwinV2-B | ImageNet-1K | 256x256 | 16x16 | 84.6 | 97.0 | 88M | 21.8G | 174 | - | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window16_256.pth)/[baidu](https://pan.baidu.com/s/1dlDQGn8BXCmnh7wQSM5Nhw?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window16_256.yaml) |
| SwinV2-B<sup>\*</sup> | ImageNet-22K | 256x256 | 16x16 | 86.2 | 97.9 | 88M | 21.8G | 174 | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12_192_22k.pth)/[baidu](https://pan.baidu.com/s/1Xc2rsSsRQz_sy5mjgfxrMQ?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window12_192_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12to16_192to256_22kto1k_ft.pth)/[baidu](https://pan.baidu.com/s/1sgstld4MgGsZxhUAW7MlmQ?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window12to16_192to256_22kto1k_ft.yaml) |
| SwinV2-B<sup>\*</sup> | ImageNet-22K | 384x384 | 24x24 | 87.1 | 98.2 | 88M | 54.7G | 57 | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12_192_22k.pth)/[baidu](https://pan.baidu.com/s/1Xc2rsSsRQz_sy5mjgfxrMQ?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window12_192_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_base_patch4_window12to24_192to384_22kto1k_ft.pth)/[baidu](https://pan.baidu.com/s/17u3sEQaUYlvfL195rrORzQ?pwd=swin)/[config](configs/swinv2/swinv2_base_patch4_window12to24_192to384_22kto1k_ft.yaml) |
| SwinV2-L<sup>\*</sup> | ImageNet-22K | 256x256 | 16x16 | 86.9 | 98.0 | 197M | 47.5G | 95 | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12_192_22k.pth)/[baidu](https://pan.baidu.com/s/11PhCV7qAGXtZ8dXNgyiGOw?pwd=swin)/[config](configs/swinv2/swinv2_large_patch4_window12_192_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12to16_192to256_22kto1k_ft.pth)/[baidu](https://pan.baidu.com/s/1pqp31N80qIWjFPbudzB6Bw?pwd=swin)/[config](configs/swinv2/swinv2_large_patch4_window12to16_192to256_22kto1k_ft.yaml) |
| SwinV2-L<sup>\*</sup> | ImageNet-22K | 384x384 | 24x24 | 87.6 | 98.3 | 197M | 115.4G | 33 | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12_192_22k.pth)/[baidu](https://pan.baidu.com/s/11PhCV7qAGXtZ8dXNgyiGOw?pwd=swin)/[config](configs/swinv2/swinv2_large_patch4_window12_192_22k.yaml) | [github](https://github.com/SwinTransformer/storage/releases/download/v2.0.0/swinv2_large_patch4_window12to24_192to384_22kto1k_ft.pth)/[baidu](https://pan.baidu.com/s/13URdNkygr3Xn0N3e6IwjgA?pwd=swin)/[config](configs/swinv2/swinv2_large_patch4_window12to24_192to384_22kto1k_ft.yaml) |

Note:
- SwinV2-B<sup>\*</sup> (SwinV2-L<sup>\*</sup>) with input resolutions of 256x256 and 384x384 are both fine-tuned from the same pre-trained model, which uses a smaller input resolution of 192x192.
- SwinV2-B<sup>\*</sup> (384x384) achieves 78.08 acc@1 on ImageNet-1K-V2, while SwinV2-L<sup>\*</sup> (384x384) achieves 78.31.
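The `#params` column of these tables can be reproduced for any instantiated model by counting parameter elements; for example (again using timm's port, so the model name is timm's):

```python
import timm

# Swin-T should land near the 28M reported in the V1 table above.
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=False)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```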
**ImageNet-1K Pretrained Swin MLP Models**

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 1K model |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| [Mixer-B/16](https://arxiv.org/pdf/2105.01601.pdf) | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | [official repo](https://github.com/google-research/vision_transformer) |
| [ResMLP-S24](https://arxiv.org/abs/2105.03404) | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | [timm](https://github.com/rwightman/pytorch-image-models) |
| [ResMLP-B24](https://arxiv.org/abs/2105.03404) | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | [timm](https://github.com/rwightman/pytorch-image-models) |
| Swin-T/C24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.5/swin_tiny_c24_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/17k-7l6Sxt7uZ7IV0f26GNQ)/[config](configs/swin/swin_tiny_c24_patch4_window8_256.yaml) |
| SwinMLP-T/C24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.5/swin_mlp_tiny_c24_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/1Sa4vP5R0M2RjfIe9HIga-Q)/[config](configs/swin/swin_mlp_tiny_c24_patch4_window8_256.yaml) |
| SwinMLP-T/C12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.5/swin_mlp_tiny_c12_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/1mM9J2_DEVZHUB5ASIpFl0w)/[config](configs/swin/swin_mlp_tiny_c12_patch4_window8_256.yaml) |
| SwinMLP-T/C6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.5/swin_mlp_tiny_c6_patch4_window8_256.pth)/[baidu](https://pan.baidu.com/s/1hUTYVT2W1CsjICw-3W-Vjg)/[config](configs/swin/swin_mlp_tiny_c6_patch4_window8_256.yaml) |
| SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | [github](https://github.com/SwinTransformer/storage/releases/download/v1.0.5/swin_mlp_base_patch4_window7_224.pth)/[baidu](https://pan.baidu.com/s/1zww3dnbX3GxNiGfb-GwyUg)/[config](configs/swin/swin_mlp_base_patch4_window7_224.yaml) |

Note: the access code for `baidu` links is `swin`. C24 means each head has 24 channels.
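As an illustrative sketch of the "group linear" token mixer that Swin MLP uses in place of window self-attention (a simplified, hypothetical module, not the repo's exact implementation): a `Conv1d` with `kernel_size=1` and `groups=num_heads` acts as a grouped linear map over the tokens of each window.

```python
# Hypothetical, simplified module: tokens within a window are mixed by a
# grouped linear layer, one mixer per head group -- the Swin MLP idea.
import torch
import torch.nn as nn

class WindowGroupLinear(nn.Module):
    def __init__(self, window_tokens=49, num_heads=3):
        super().__init__()
        self.num_heads = num_heads
        # Conv1d with kernel_size=1 and groups=num_heads is a grouped
        # linear layer over the token dimension.
        self.mix = nn.Conv1d(num_heads * window_tokens,
                             num_heads * window_tokens,
                             kernel_size=1, groups=num_heads)

    def forward(self, x):
        # x: (num_windows * B, window_tokens, C), C divisible by num_heads
        B_, N, C = x.shape
        h = x.reshape(B_, N, self.num_heads, C // self.num_heads)
        h = h.permute(0, 2, 1, 3).reshape(B_, self.num_heads * N, C // self.num_heads)
        h = self.mix(h)  # mixes the N tokens within each head group
        h = h.reshape(B_, self.num_heads, N, C // self.num_heads)
        return h.permute(0, 2, 1, 3).reshape(B_, N, C)

out = WindowGroupLinear()(torch.randn(8, 49, 96))  # -> (8, 49, 96)
```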
**ImageNet-22K Pretrained Swin-MoE Models**

- Please refer to [get_started](get_started.md#mixture-of-experts-support) for instructions on running Swin-MoE.
- Pretrained models for Swin-MoE can be found in the [MODEL HUB](MODELHUB.md#imagenet-22k-pretrained-swin-moe-models).

## Main Results on Downstream Tasks

**COCO Object Detection (2017 val)**

| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
| Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
| Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
| Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
| Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
| Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
| Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
| Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
| Swin-L | HTC++<sup>*</sup> | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |

Note: <sup>*</sup> indicates multi-scale testing.

**ADE20K Semantic Segmentation (val)**

| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
| Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
| Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
| Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
| Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |

## Citing Swin Transformer

```
@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
```

## Citing Local Relation Networks (the first full-attention visual backbone)

```
@inproceedings{hu2019local,
  title={Local Relation Networks for Image Recognition},
  author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={3464--3473},
  year={2019}
}
```

## Citing Swin Transformer V2

```
@inproceedings{liu2021swinv2,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution},
  author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```

## Citing SimMIM (a self-supervised approach that enables SwinV2-G)

```
@inproceedings{xie2021simmim,
  title={SimMIM: A Simple Framework for Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```

## Citing SimMIM-data-scaling

```
@article{xie2022data,
  title={On Data Scaling in Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Wei, Yixuan and Dai, Qi and Hu, Han},
  journal={arXiv preprint arXiv:2206.04664},
  year={2022}
}
```
## Citing Swin-MoE

```
@misc{hwang2022tutel,
      title={Tutel: Adaptive Mixture-of-Experts at Scale},
      author={Changho Hwang and Wei Cui and Yifan Xiong and Ziyue Yang and Ze Liu and Han Hu and Zilong Wang and Rafael Salas and Jithin Jose and Prabhat Ram and Joe Chau and Peng Cheng and Fan Yang and Mao Yang and Yongqiang Xiong},
      year={2022},
      eprint={2206.03382},
      archivePrefix={arXiv}
}
```

## Getting Started

- For **Image Classification**, please see [get_started.md](get_started.md) for detailed instructions.
- For **Object Detection and Instance Segmentation**, please see [Swin Transformer for Object Detection](https://github.com/SwinTransformer/Swin-Transformer-Object-Detection).
- For **Semantic Segmentation**, please see [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation).
- For **Self-Supervised Learning**, please see [Transformer-SSL](https://github.com/SwinTransformer/Transformer-SSL).
- For **Video Recognition**, please see [Video Swin Transformer](https://github.com/SwinTransformer/Video-Swin-Transformer).
## Third-party Usage and Experiments

***In this paragraph, we cross-link third-party repositories that use Swin and report results. You can let us know by raising an issue.***

(`Note: please report accuracy numbers and provide trained models in your new repository to help others get a sense of correctness and model behavior.`)

[12/29/2022] Swin Transformers (V2) inference implemented in FasterTransformer: [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md)

[06/30/2022] Swin Transformers (V1) inference implemented in FasterTransformer: [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md)

[05/12/2022] Swin Transformers (V1) implemented in TensorFlow with the pre-trained parameters ported into them. Find the implementation, TensorFlow weights, and a code example in [this repository](https://github.com/sayakpaul/swin-transformers-tf/).

[04/06/2022] Swin Transformer for Audio Classification: [Hierarchical Token Semantic Audio Transformer](https://github.com/RetroCirce/HTS-Audio-Transformer).

[12/21/2021] Swin Transformer for StyleGAN: [StyleSwin](https://github.com/microsoft/StyleSwin)

[12/13/2021] Swin Transformer for Face Recognition: [FaceX-Zoo](https://github.com/JDAI-CV/FaceX-Zoo)

[08/29/2021] Swin Transformer for Image Restoration: [SwinIR](https://github.com/JingyunLiang/SwinIR)

[08/12/2021] Swin Transformer for person reID: [https://github.com/layumi/Person_reID_baseline_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch)

[06/29/2021] Swin-Transformer in PaddleClas and inference based on whl package: [https://github.com/PaddlePaddle/PaddleClas](https://github.com/PaddlePaddle/PaddleClas)

[04/14/2021] Swin for RetinaNet in Detectron: https://github.com/xiaohu2015/SwinT_detectron2.

[04/16/2021] Included in a famous model zoo: https://github.com/rwightman/pytorch-image-models.

[04/20/2021] Swin-Transformer classifier inference using TorchServe: https://github.com/kamalkraj/Swin-Transformer-Serve

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
合并了 [SimMIM](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSimMIM)，这是一种基于 **掩码图像建模** 的预训练方法，适用于 Swin 和 SwinV2（同时也适用于 ViT 和 ResNet）。请参考 [SimMIM 使用指南](get_started.md#simmim-support) 来体验 SimMIM 预训练。\n\n2. 发布了一系列使用 SimMIM 方法预训练的 Swin 和 SwinV2 模型（详见 [MODELHUB 中的 SimMIM](MODELHUB.md#simmim-pretrained-swin-v2-models)），模型规模从 SwinV2-Small-50M 到 SwinV2-giant-1B，数据集大小从 ImageNet-1K-10% 到 ImageNet-22K，迭代次数从 12.5 万到 50 万。您可以利用这些模型来研究 MIM 方法的特性。更多细节请参阅关于 **数据缩放** 的论文：[arxiv.org\u002Fabs\u002F2206.04664](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04664)。\n\n***2022年7月9日***\n\n`新闻`：\n\n1. SwinV2-G 在 ADE20K 语义分割任务上达到 `61.4 mIoU`（比之前的 SwinV2-G 模型高出 1.5 mIoU），采用了额外的 **特征蒸馏 (FD)** 方法，**刷新了该基准的记录**。FD 是一种可以普遍提升多种预训练模型微调性能的方法，包括 DeiT、DINO 和 CLIP 等。尤其值得一提的是，它将 CLIP 预训练的 ViT-L 模型在 ImageNet-1K 图像分类上的准确率提升了 1.6%，达到 `89.0%`，成为 **目前最精确的 ViT-L 模型**。\n2. 合并了来自 **Nvidia** 的一个 PR，该 PR 提供了更快的 Swin Transformer 推理支持，在 `T4 和 A100 GPU` 上有显著的速度提升。\n3. 合并了来自 **Nvidia** 的另一个 PR，新增了一个选项，允许在训练中使用 `纯 FP16 (Apex O2)`，同时几乎保持精度不变。\n\n***2022年6月3日***\n\n1. 新增了 **Swin-MoE**，这是使用 [Tutel](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Ftutel) 实现的 Swin Transformer 混合专家变体（Tutel 是一个优化的混合专家实现）。**Swin-MoE** 被介绍在 [TuTel](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.03382) 论文中。\n\n***2022年5月12日***\n\n1. 发布了 [Swin Transformer V2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09883) 在 ImageNet-1K 和 ImageNet-22K 上的预训练模型。\n2. 发布了 Swin-V1-Tiny 和 Swin-V2-Small 的 ImageNet-22K 预训练模型。\n\n***2022年3月2日***\n\n1. Swin Transformer V2 和 SimMIM 已被 CVPR 2022 接收。[SimMIM](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSimMIM) 是一种基于掩码图像建模的自监督预训练方法，是突破 30 亿参数 Swin V2 模型的关键技术，其使用的标注数据量仅为基于 JFT-3B 的先前十亿级模型的 `40 分之一`。\n\n***2022年2月9日***\n\n1. 集成了 [Huggingface Spaces 🤗](https:\u002F\u002Fhuggingface.co\u002Fspaces) 并使用 [Gradio](https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio) 运行。您可以在 Web Demo 中试用：[![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fakhaliq\u002FSwin-Transformer)\n\n***2021年10月12日***\n\n1. Swin Transformer 获得了 ICCV 2021 最佳论文奖（Marr 奖）。\n\n***2021年8月9日***\n\n1. [Soft Teacher](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.09018v2.pdf) 将在 ICCV2021 上发表。代码将在 [GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSoftTeacher) 中发布。`Soft Teacher` 是一种端到端的半监督目标检测方法，在 COCO test-dev 上创造了新的记录：`61.3 box AP` 和 `53.0 mask AP`。\n\n***2021年7月3日***\n\n1. 新增了 **Swin MLP**，它是通过将所有多头自注意力 (MHSA) 层替换为 MLP 层对 `Swin Transformer` 的改进版本（更准确地说是分组线性层）。移位窗口结构也能显著提升普通 MLP 架构的性能。\n\n***2021年6月25日***\n\n1. [Video Swin Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.13230) 已发布于 [Video-Swin-Transformer](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FVideo-Swin-Transformer)。`Video Swin Transformer` 在一系列视频识别基准测试中达到了最先进的准确率，包括动作识别（在 Kinetics-400 上达到 `84.9` 的 top-1 准确率，在 Kinetics-600 上达到 `86.1` 的 top-1 准确率，且预训练数据量减少了约 `20 倍`，模型尺寸也缩小了约 `3 倍`）以及时序建模（在 Something-Something v2 上达到 `69.6` 的 top-1 准确率）。\n\n***2021年5月12日***\n\n1. 作为 **自监督学习** 的骨干网络：[Transformer-SSL](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FTransformer-SSL)。\n\n使用 Swin-Transformer 作为自监督学习的骨干网络，可以评估所学表征在下游任务中的迁移性能，而这一点在以往使用 ViT\u002FDeiT 的工作中往往缺失，因为这些模型尚未针对下游任务进行充分优化。\n\n***2021年4月12日***\n\n初始提交：\n\n1. 
提供了 ImageNet-1K（[Swin-T-IN1K](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_tiny_patch4_window7_224.pth)、[Swin-S-IN1K](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_small_patch4_window7_224.pth)、[Swin-B-IN1K](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window7_224.pth)）和 ImageNet-22K（[Swin-B-IN22K](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window7_224_22k.pth)、[Swin-L-IN22K](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_large_patch4_window7_224_22k.pth)）的预训练模型。\n2. 提供了支持 ImageNet-1K 图像分类、COCO 目标检测和 ADE20K 语义分割的代码和模型。\n3. 在分支 [LR-Net](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer\u002Ftree\u002FLR-Net) 中提供了用于 [局部关系层](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.11491.pdf) 的 CUDA 内核实现。\n\n## 简介\n\n**Swin Transformer**（名称 `Swin` 代表 **S**hifted **win**dow）最初在 [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.14030) 中被描述，它能够作为计算机视觉领域的通用骨干网络。它本质上是一种层次化的 Transformer，其表征通过移位窗口计算得出。移位窗口机制通过将自注意力计算限制在不重叠的局部窗口内，同时允许跨窗口连接，从而提高了效率。\n\nSwin Transformer 在 COCO 目标检测任务上表现出色（test-dev 上达到 `58.7 box AP` 和 `51.1 mask AP`），在 ADE20K 语义分割任务上也取得了优异的成绩（val 上达到 `53.5 mIoU`），大幅超越了此前的模型。\n\n![teaser](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_Swin-Transformer_readme_b570a2563714.png)\n\n## 预训练模型在 ImageNet 上的主要结果\n\n**ImageNet-1K 和 ImageNet-22K 预训练的 Swin-V1 模型**\n\n| 名称 | 预训练数据集 | 分辨率 | 精度@1 | 精度@5 | 参数量 | FLOPs | FPS | 22K 模型 | 1K 模型 |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---: |:---: |\n| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_tiny_patch4_window7_224.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F156nWJy4Q28rDlrX-rRbI3w)\u002F[config](configs\u002Fswin\u002Fswin_tiny_patch4_window7_224.yaml)\u002F[log](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Ffiles\u002F7745562\u002Flog_swin_tiny_patch4_window7_224.txt) |\n| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_small_patch4_window7_224.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1KFjpj3Efey3LmtE1QqPeQg)\u002F[config](configs\u002Fswin\u002Fswin_small_patch4_window7_224.yaml)\u002F[log](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Ffiles\u002F7745563\u002Flog_swin_small_patch4_window7_224.txt) |\n| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278  | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window7_224.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F16bqCTEc70nC_isSsgBSaqQ)\u002F[config](configs\u002Fswin\u002Fswin_base_patch4_window7_224.yaml)\u002F[log](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Ffiles\u002F7745564\u002Flog_swin_base_patch4_window7_224.txt) |\n| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | 
[github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window12_384.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xT1cu740-ejW7htUdVLnmw)\u002F[config](configs\u002Fswin\u002Fswin_base_patch4_window12_384_finetune.yaml) |\n| Swin-T | ImageNet-22K | 224x224 | 80.9 | 96.0 | 28M | 4.5G | 755 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.8\u002Fswin_tiny_patch4_window7_224_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1vct0VYwwQQ8PYkBjwSSBZQ?pwd=swin)\u002F[config](configs\u002Fswin\u002Fswin_tiny_patch4_window7_224_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.8\u002Fswin_tiny_patch4_window7_224_22kto1k_finetune.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1K0OO-nGZDPkR8fm_r83e8Q?pwd=swin)\u002F[config](configs\u002Fswin\u002Fswin_tiny_patch4_window7_224_22kto1k_finetune.yaml) |\n| Swin-S | ImageNet-22K | 224x224 | 83.2 | 97.0 | 50M | 8.7G | 437 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.8\u002Fswin_small_patch4_window7_224_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11NC1xdT5BAGBgazdTme5Sg?pwd=swin)\u002F[config](configs\u002Fswin\u002Fswin_small_patch4_window7_224_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.8\u002Fswin_small_patch4_window7_224_22kto1k_finetune.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10RFVfjQJhwPfeHrmxQUaLw?pwd=swin)\u002F[config](configs\u002Fswin\u002Fswin_small_patch4_window7_224_22kto1k_finetune.yaml) |\n| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window7_224_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1y1Ec3UlrKSI8IMtEs-oBXA)\u002F[config](configs\u002Fswin\u002Fswin_base_patch4_window7_224_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window7_224_22kto1k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1n_wNkcbRxVXit8r_KrfAVg)\u002F[config](configs\u002Fswin\u002Fswin_base_patch4_window7_224_22kto1k_finetune.yaml) |\n| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window12_384_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1vwJxnJcVqcLZAw9HaqiR6g) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_base_patch4_window12_384_22kto1k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1caKTSdoLJYoi4WBcnmWuWg)\u002F[config](configs\u002Fswin\u002Fswin_base_patch4_window12_384_22kto1k_finetune.yaml) |\n| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | 
[github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_large_patch4_window7_224_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pws3rOTFuOebBYP3h6Kx8w)\u002F[config](configs\u002Fswin\u002Fswin_large_patch4_window7_224_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_large_patch4_window7_224_22kto1k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1NkQApMWUhxBGjk1ne6VqBQ)\u002F[config](configs\u002Fswin\u002Fswin_large_patch4_window7_224_22kto1k_finetune.yaml) |\n| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_large_patch4_window12_384_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1sl7o_bJA143OD7UqSLAMoA) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_large_patch4_window12_384_22kto1k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1X0FLHQyPOC6Kmv2CmgxJvA)\u002F[config](configs\u002Fswin\u002Fswin_large_patch4_window12_384_22kto1k_finetune.yaml) |\n\n**ImageNet-1K 和 ImageNet-22K 预训练的 Swin-V2 模型**\n\n| 名称 | 预训练 | 分辨率 | 窗口大小 | top-1准确率 | top-5准确率 | 参数量 | FLOPs | FPS | 22K模型 | 1K模型 |\n|:---------------------:| :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---:|:---: |:---: |\n| SwinV2-T | ImageNet-1K | 256x256 | 8x8 | 81.8 | 95.9 | 28M | 5.9G | 572 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_tiny_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1RzLkAH_5OtfRCJe6Vlg6rg?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_tiny_patch4_window8_256.yaml) |\n| SwinV2-S | ImageNet-1K | 256x256 | 8x8 | 83.7 | 96.6 | 50M | 11.5G | 327 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_small_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F195PdA41szEduW3jEtRSa4Q?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_small_patch4_window8_256.yaml) |\n| SwinV2-B | ImageNet-1K | 256x256 | 8x8 | 84.2 | 96.9 | 88M | 20.3G | 217 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18AfMSz3dPyzIvP1dKuERvQ?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window8_256.yaml) |\n| SwinV2-T | ImageNet-1K | 256x256 | 16x16 | 82.8 | 96.2 | 28M | 6.6G | 437 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_tiny_patch4_window16_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1dyK3cK9Xipmv6RnTtrPocw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_tiny_patch4_window16_256.yaml) |\n| SwinV2-S | ImageNet-1K | 256x256 | 16x16 | 84.1 | 96.8 | 50M | 12.6G  | 257 | - | 
[github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_small_patch4_window16_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ZIPiSfWNKTPp821Ka-Mifw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_small_patch4_window16_256.yaml) |\n| SwinV2-B | ImageNet-1K | 256x256 | 16x16 | 84.6 | 97.0 | 88M | 21.8G | 174 | - | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window16_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1dlDQGn8BXCmnh7wQSM5Nhw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window16_256.yaml) |\n| SwinV2-B\u003Csup>\\*\u003C\u002Fsup> | ImageNet-22K | 256x256 | 16x16 | 86.2 | 97.9 |  88M | 21.8G | 174 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window12_192_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xc2rsSsRQz_sy5mjgfxrMQ?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window12_192_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window12to16_192to256_22kto1k_ft.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1sgstld4MgGsZxhUAW7MlmQ?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window12to16_192to256_22kto1k_ft.yaml) |\n| SwinV2-B\u003Csup>\\*\u003C\u002Fsup> | ImageNet-22K | 384x384 | 24x24 | 87.1 | 98.2 | 88M | 54.7G | 57  | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window12_192_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Xc2rsSsRQz_sy5mjgfxrMQ?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window12_192_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_base_patch4_window12to24_192to384_22kto1k_ft.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F17u3sEQaUYlvfL195rrORzQ?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_base_patch4_window12to24_192to384_22kto1k_ft.yaml) |\n| SwinV2-L\u003Csup>\\*\u003C\u002Fsup> | ImageNet-22K | 256x256 | 16x16 | 86.9 | 98.0 | 197M | 47.5G | 95  | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_large_patch4_window12_192_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11PhCV7qAGXtZ8dXNgyiGOw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_large_patch4_window12_192_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_large_patch4_window12to16_192to256_22kto1k_ft.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pqp31N80qIWjFPbudzB6Bw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_large_patch4_window12to16_192to256_22kto1k_ft.yaml) |\n| SwinV2-L\u003Csup>\\*\u003C\u002Fsup> | ImageNet-22K | 384x384 | 24x24 | 87.6 | 98.3 | 197M | 115.4G | 33  | 
[github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_large_patch4_window12_192_22k.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F11PhCV7qAGXtZ8dXNgyiGOw?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_large_patch4_window12_192_22k.yaml) | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv2.0.0\u002Fswinv2_large_patch4_window12to24_192to384_22kto1k_ft.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F13URdNkygr3Xn0N3e6IwjgA?pwd=swin)\u002F[config](configs\u002Fswinv2\u002Fswinv2_large_patch4_window12to24_192to384_22kto1k_ft.yaml) |\n\n注意：\n- SwinV2-B\u003Csup>\\*\u003C\u002Fsup>（SwinV2-L\u003Csup>\\*\u003C\u002Fsup>）在输入分辨率为256x256和384x384时，均是从使用较小输入分辨率192x192的同一预训练模型微调而来。\n- SwinV2-B\u003Csup>\\*\u003C\u002Fsup>（384x384）在ImageNet-1K-V2上的top-1准确率为78.08，而SwinV2-L\u003Csup>\\*\u003C\u002Fsup>（384x384）则达到78.31。\n\n**ImageNet-1K预训练的Swin MLP模型**\n\n| 名称 | 预训练数据集 | 分辨率 | top-1准确率 | top-5准确率 | 参数量 | FLOPs | FPS | 1K模型 |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| [Mixer-B\u002F16](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2105.01601.pdf) | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | [官方仓库](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer) |\n| [ResMLP-S24](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03404) | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | [timm](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-image-models) |\n| [ResMLP-B24](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03404) | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | [timm](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-image-models) |\n| Swin-T\u002FC24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.5\u002Fswin_tiny_c24_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F17k-7l6Sxt7uZ7IV0f26GNQ)\u002F[config](configs\u002Fswin\u002Fswin_tiny_c24_patch4_window8_256.yaml) |\n| SwinMLP-T\u002FC24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.5\u002Fswin_mlp_tiny_c24_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1Sa4vP5R0M2RjfIe9HIga-Q)\u002F[config](configs\u002Fswin\u002Fswin_mlp_tiny_c24_patch4_window8_256.yaml) |\n| SwinMLP-T\u002FC12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.5\u002Fswin_mlp_tiny_c12_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mM9J2_DEVZHUB5ASIpFl0w)\u002F[config](configs\u002Fswin\u002Fswin_mlp_tiny_c12_patch4_window8_256.yaml) |\n| SwinMLP-T\u002FC6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | [github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.5\u002Fswin_mlp_tiny_c6_patch4_window8_256.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hUTYVT2W1CsjICw-3W-Vjg)\u002F[config](configs\u002Fswin\u002Fswin_mlp_tiny_c6_patch4_window8_256.yaml) |\n| SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | 
[github](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.5\u002Fswin_mlp_base_patch4_window7_224.pth)\u002F[baidu](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1zww3dnbX3GxNiGfb-GwyUg)\u002F[config](configs\u002Fswin\u002Fswin_mlp_base_patch4_window7_224.yaml) |\n\n注：`baidu`的提取码为`swin`。C24表示每个头有24个通道。\n\n**ImageNet-22K预训练的Swin-MoE模型**\n\n- 请参阅[入门指南](get_started.md#mixture-of-experts-support)，了解如何运行Swin-MoE。\n- Swin-MoE的预训练模型可在[模型中心](MODELHUB.md#imagenet-22k-pretrained-swin-moe-models)中找到。\n\n## 下游任务的主要结果\n\n**COCO目标检测（2017验证集）**\n\n| 主干网络 | 方法 | 预训练数据集 | 学习率调度 | box mAP | mask mAP | 参数量 | FLOPs |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| Swin-T | Mask R-CNN | ImageNet-1K | 3倍 | 46.0 | 41.6 | 48M | 267G |\n| Swin-S | Mask R-CNN | ImageNet-1K | 3倍 | 48.5 | 43.3 | 69M | 359G |\n| Swin-T | 级联Mask R-CNN | ImageNet-1K | 3倍 | 50.4 | 43.7 | 86M | 745G |\n| Swin-S | 级联Mask R-CNN | ImageNet-1K | 3倍 | 51.9 | 45.0 | 107M | 838G |\n| Swin-B | 级联Mask R-CNN | ImageNet-1K | 3倍 | 51.9 | 45.0 | 145M | 982G |\n| Swin-T | RepPoints V2 | ImageNet-1K | 3倍 | 50.0 | - | 45M | 283G |\n| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3倍 | 50.3 | 43.6 | 47M | 292G |\n| Swin-B | HTC++ | ImageNet-22K | 6倍 | 56.4 | 49.1 | 160M | 1043G |\n| Swin-L | HTC++ | ImageNet-22K | 3倍 | 57.1 | 49.5 | 284M | 1470G |\n| Swin-L | HTC++\u003Csup>*\u003C\u002Fsup> | ImageNet-22K | 3倍 | 58.0 | 50.4 | 284M | - |\n\n注：\u003Csup>*\u003C\u002Fsup>表示多尺度测试。\n\n**ADE20K语义分割（验证集）**\n\n| 主干网络 | 方法 | 预训练数据集 | 裁剪尺寸 | 学习率调度 | mIoU | mIoU (ms+flip) | 参数量 | FLOPs |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| Swin-T | UPerNet | ImageNet-1K | 512x512 | 16万步 | 44.51 | 45.81 | 60M | 945G |\n| Swin-S | UPerNet | ImageNet-1K | 512x512 | 16万步 | 47.64 | 49.47 | 81M | 1038G |\n| Swin-B | UPerNet | ImageNet-1K | 512x512 | 16万步 | 48.13 | 49.72 | 121M | 1188G |\n| Swin-B | UPerNet | ImageNet-22K | 640x640 | 16万步 | 50.04 | 51.66 | 121M | 1841G |\n| Swin-L | UPerNet | ImageNet-22K | 640x640 | 16万步 | 52.05 | 53.53 | 234M | 3230G |\n\n## 引用Swin Transformer\n\n```\n@inproceedings{liu2021Swin,\n  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},\n  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},\n  booktitle={Proceedings of the IEEE\u002FCVF International Conference on Computer Vision (ICCV)},\n  year={2021}\n}\n```\n\n## 引用局部关系网络（首个全注意力视觉主干网络）\n\n```\n@inproceedings{hu2019local,\n  title={Local Relation Networks for Image Recognition},\n  author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},\n  booktitle={Proceedings of the IEEE\u002FCVF International Conference on Computer Vision (ICCV)},\n  pages={3464--3473},\n  year={2019}\n}\n```\n\n## 引用Swin Transformer V2\n\n```\n@inproceedings{liu2021swinv2,\n  title={Swin Transformer V2: Scaling Up Capacity and Resolution},\n  author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},\n  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},\n  year={2022}\n}\n```\n\n## 引用SimMIM（一种使SwinV2-G成为可能的自监督方法）\n\n```\n@inproceedings{xie2021simmim,\n  title={SimMIM: A Simple Framework for Masked Image Modeling},\n  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},\n  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},\n  year={2022}\n}\n```\n\n## 引用SimMIM-数据规模扩展\n\n```\n@article{xie2022data,\n  title={On Data Scaling in Masked Image Modeling},\n  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Wei, Yixuan and Dai, Qi and Hu, Han},\n  journal={arXiv preprint arXiv:2206.04664},\n  
year={2022}\n}\n```\n\n## 引用Swin-MoE\n\n```\n@misc{hwang2022tutel,\n      title={Tutel: Adaptive Mixture-of-Experts at Scale},\n      author={Changho Hwang and Wei Cui and Yifan Xiong and Ziyue Yang and Ze Liu and Han Hu and Zilong Wang and Rafael Salas and Jithin Jose and Prabhat Ram and Joe Chau and Peng Cheng and Fan Yang and Mao Yang and Yongqiang Xiong},\n      year={2022},\n      eprint={2206.03382},\n      archivePrefix={arXiv}\n}\n```\n\n## 入门\n\n- 对于**图像分类**，请参阅[get_started.md](get_started.md)以获取详细说明。\n- 对于**目标检测和实例分割**，请参阅[Swin Transformer用于目标检测](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FSwin-Transformer-Object-Detection)。\n- 对于**语义分割**，请参阅[Swin Transformer用于语义分割](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FSwin-Transformer-Semantic-Segmentation)。\n- 对于**自监督学习**，请参阅[Transformer-SSL](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FTransformer-SSL)。\n- 对于**视频识别**，请参阅[Video Swin Transformer](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FVideo-Swin-Transformer)。\n\n## 第三方使用与实验\n\n***本节交叉链接了使用Swin并报告结果的第三方仓库，欢迎通过提交issue告知我们。***\n\n（注意：请在您的新仓库中报告准确率数值并提供训练好的模型，以便他人验证其正确性并了解模型行为。）\n\n[2022年12月29日] FasterTransformer中实现了Swin Transformers (V2)推理：[FasterTransformer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FFasterTransformer\u002Fblob\u002Fmain\u002Fdocs\u002Fswin_guide.md)\n\n[2022年6月30日] FasterTransformer中实现了Swin Transformers (V1)推理：[FasterTransformer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FFasterTransformer\u002Fblob\u002Fmain\u002Fdocs\u002Fswin_guide.md)\n\n[2022年5月12日] 在TensorFlow中实现了Swin Transformers (V1)，并移植了预训练参数。实现、TensorFlow权重及代码示例可在[此仓库](https:\u002F\u002Fgithub.com\u002Fsayakpaul\u002Fswin-transformers-tf\u002F)中找到。\n\n[2022年4月6日] 用于音频分类的Swin Transformer：[分层标记语义音频Transformer](https:\u002F\u002Fgithub.com\u002FRetroCirce\u002FHTS-Audio-Transformer)。\n\n[2021年12月21日] 用于StyleGAN的Swin Transformer：[StyleSwin](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FStyleSwin)\n\n[2021年12月13日] 用于人脸识别的Swin Transformer：[FaceX-Zoo](https:\u002F\u002Fgithub.com\u002FJDAI-CV\u002FFaceX-Zoo)\n\n[2021年8月29日] 用于图像修复的Swin Transformer：[SwinIR](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR)\n\n[2021年8月12日] 用于行人重识别的Swin Transformer：[https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson_reID_baseline_pytorch](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson_reID_baseline_pytorch)\n\n[2021年6月29日] PaddleClas中使用了Swin-Transformer，并基于whl包进行推理：[https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleClas](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleClas)\n\n[2021年4月14日] 在Detectron中将Swin用于RetinaNet：https:\u002F\u002Fgithub.com\u002Fxiaohu2015\u002FSwinT_detectron2。\n\n[2021年4月16日] 被纳入著名的模型库：https:\u002F\u002Fgithub.com\u002Frwightman\u002Fpytorch-image-models。\n\n[2021年4月20日] 使用TorchServe进行Swin-Transformer分类器推理：https:\u002F\u002Fgithub.com\u002Fkamalkraj\u002FSwin-Transformer-Serve\n\n## 贡献\n\n本项目欢迎贡献和建议。大多数贡献都需要您同意贡献者许可协议（CLA），声明您有权且确实授予我们使用您贡献的权利。有关详情，请访问https:\u002F\u002Fcla.opensource.microsoft.com。\n\n当您提交拉取请求时，CLA机器人会自动确定您是否需要提供CLA，并相应地为PR添加标记（例如状态检查、评论）。只需按照机器人提供的指示操作即可。对于所有使用我们CLA的仓库，您只需执行一次此操作。\n\n本项目已采用[微软开源行为准则](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002F)。更多信息请参阅[行为准则常见问题解答](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002Ffaq\u002F)，或如有任何其他问题或意见，请联系[opencode@microsoft.com](mailto:opencode@microsoft.com)。\n\n## 
商标\n\n本项目可能包含项目、产品或服务的商标或徽标。对微软商标或徽标的授权使用须遵守并遵循[微软商标与品牌指南](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Flegal\u002Fintellectualproperty\u002Ftrademarks\u002Fusage\u002Fgeneral)。在本项目的修改版本中使用微软商标或徽标不得造成混淆或暗示微软的赞助。任何第三方商标或徽标的使用均受该第三方政策的约束。","# Swin Transformer 快速上手指南\n\nSwin Transformer 是一种基于移位窗口（Shifted Window）机制的层级式视觉 Transformer，可作为计算机视觉任务的通用骨干网络。本指南将帮助您快速完成环境配置、安装及基础图像分类任务的使用。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 Windows (需配置 WSL2 或兼容环境)\n*   **Python**: 3.7 及以上版本\n*   **PyTorch**: 1.7 及以上版本 (建议 1.8+)\n*   **CUDA**: 支持 CUDA 的 NVIDIA GPU (用于加速训练和推理)\n*   **其他依赖**: `torchvision`, `timm`, `opencv-python`, `scipy`, `yacs`\n\n> **国内加速建议**：\n> 建议使用清华源或阿里源安装 Python 依赖，以提升下载速度。\n> ```bash\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 2. 安装步骤\n\n### 2.1 克隆代码库\n首先从 GitHub 克隆官方仓库：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer.git\ncd Swin-Transformer\n```\n\n### 2.2 安装依赖\n安装项目所需的 Python 包：\n```bash\npip install timm==0.4.12 opencv-python==4.5.3.56 scipy==1.7.0 yacs==0.1.8\n```\n*注：若需使用特定版本的 PyTorch，请参考 [PyTorch 官网](https:\u002F\u002Fpytorch.org\u002F) 进行安装。*\n\n### 2.3 编译 CUDA 扩展 (可选但推荐)\n为了获得最佳的训练和推理速度，建议编译本地的 CUDA 算子（如 Shifted Window Attention）：\n```bash\npython setup.py build_ext --inplace\n```\n*如果您的环境没有可用的 GPU 或 CUDA 工具链，可以跳过此步，模型将使用纯 PyTorch 实现运行，但速度会稍慢。*\n\n## 3. 基本使用\n\n以下示例展示如何加载预训练模型并进行单张图像的推理（图像分类任务）。\n\n### 3.1 下载预训练模型\n您可以从官方 Release 页面或国内镜像下载预训练权重。以 **Swin-Tiny** (ImageNet-1K 预训练) 为例：\n\n*   **GitHub 下载**: [swin_tiny_patch4_window7_224.pth](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002Fstorage\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Fswin_tiny_patch4_window7_224.pth)\n*   **百度网盘下载**: [链接](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F156nWJy4Q28rDlrX-rRbI3w) (提取码见原仓库文档)\n\n将下载的文件保存为 `swin_tiny_patch4_window7_224.pth` 并放在项目根目录。\n\n### 3.2 运行推理示例\n创建一个名为 `demo_inference.py` 的文件，写入以下代码：\n\n```python\nimport torch\nfrom torchvision import transforms, datasets\nfrom models import build_model\nfrom config import get_config\nimport cv2\nimport numpy as np\n\ndef load_image(image_path):\n    img = cv2.imread(image_path)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n    return img\n\ndef main():\n    # 1. 配置加载\n    config = get_config()\n    # 手动指定配置文件路径 (对应 Swin-Tiny 224x224)\n    config.defrost()\n    config.MODEL.TYPE = \"swin\"\n    config.MODEL.NAME = \"swin_tiny_patch4_window7_224\"\n    config.MODEL.SWIN.PATCH_SIZE = 4\n    config.MODEL.SWIN.WINDOW_SIZE = 7\n    config.MODEL.SWIN.DEPTHS = [2, 2, 6, 2]\n    config.MODEL.SWIN.NUM_HEADS = [3, 6, 12, 24]\n    config.DATA.IMG_SIZE = 224\n    config.freeze()\n\n    # 2. 构建模型\n    model = build_model(config)\n    \n    # 3. 加载预训练权重\n    checkpoint = torch.load('swin_tiny_patch4_window7_224.pth', map_location='cpu')\n    model.load_state_dict(checkpoint['model'], strict=True)\n    model.eval()\n    \n    # 移至 GPU (如果可用)\n    if torch.cuda.is_available():\n        model = model.cuda()\n\n    # 4. 数据预处理\n    transform = transforms.Compose([\n        transforms.ToTensor(),\n        transforms.Resize((224, 224)),\n        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n    ])\n\n    # 5. 
推理\n    # 请替换为您本地的图片路径\n    image_path = \"your_image.jpg\" \n    if not os.path.exists(image_path):\n        print(f\"未找到图片 {image_path}, 请创建测试图片或修改路径。\")\n        return\n\n    img = load_image(image_path)\n    img_tensor = transform(img).unsqueeze(0)\n    \n    if torch.cuda.is_available():\n        img_tensor = img_tensor.cuda()\n\n    with torch.no_grad():\n        output = model(img_tensor)\n        prediction = torch.softmax(output, dim=1)\n        confidence, predicted_class = torch.max(prediction, 1)\n\n    print(f\"预测类别索引：{predicted_class.item()}\")\n    print(f\"置信度：{confidence.item():.4f}\")\n\nif __name__ == '__main__':\n    import os\n    main()\n```\n\n### 3.3 执行命令\n在终端运行上述脚本：\n```bash\npython demo_inference.py\n```\n\n---\n**后续任务指引**：\n*   **目标检测\u002F实例分割**: 请跳转至 [Swin-Transformer-Object-Detection](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FSwin-Transformer-Object-Detection) 仓库。\n*   **语义分割**: 请跳转至 [Swin-Transformer-Semantic-Segmentation](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FSwin-Transformer-Semantic-Segmentation) 仓库。\n*   **视频动作识别**: 请跳转至 [Video-Swin-Transformer](https:\u002F\u002Fgithub.com\u002FSwinTransformer\u002FVideo-Swin-Transformer) 仓库。","某自动驾驶初创公司的算法团队正在开发夜间复杂路况下的实时障碍物检测系统，急需提升模型对模糊小目标的识别精度。\n\n### 没有 Swin-Transformer 时\n- **多尺度目标漏检严重**：传统 CNN 或早期 Vision Transformer 难以兼顾远近车辆，导致远处小轿车和近处行人的检测率低下。\n- **局部细节丢失**：在处理低光照噪点图像时，固定感受野的卷积操作容易忽略关键纹理特征，将阴影误判为障碍物。\n- **推理延迟过高**：为了覆盖不同尺寸的目标，不得不堆叠多个尺度的检测头，导致显存占用大，无法满足车载芯片的实时性要求。\n- **迁移训练成本高**：在自有小规模路测数据上微调时，模型极易过拟合，泛化能力差，需耗费数周收集更多标注数据。\n\n### 使用 Swin-Transformer 后\n- **层级化特征精准捕捉**：利用移位窗口机制构建的层级结构，Swin-Transformer 能自适应地提取从局部车轮到整体车身的多尺度特征，显著降低漏检率。\n- **长距离依赖建模增强**：通过移动窗口建立跨区域连接，模型能有效区分夜间阴影与真实障碍物，大幅减少误报。\n- **计算效率显著提升**：线性复杂度的设计使得在保持高精度的同时，推理速度在 T4 或 A100 显卡上获得加速，满足实时帧率需求。\n- **小样本泛化能力强**：借助其在 ImageNet-22K 等大数据集上的强大预训练权重，仅需少量路测数据微调即可达到优异效果，缩短研发周期。\n\nSwin-Transformer 通过创新的移位窗口机制，完美平衡了视觉任务中的精度与效率，成为解决复杂场景感知难题的关键引擎。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_Swin-Transformer_30e311ef.png","microsoft","Microsoft","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmicrosoft_4900709c.png","Open source projects and samples from Microsoft",null,"opensource@microsoft.com","OpenAtMicrosoft","https:\u002F\u002Fopensource.microsoft.com","https:\u002F\u002Fgithub.com\u002Fmicrosoft",[86,90,94],{"name":87,"color":88,"percentage":89},"Python","#3572A5",95.4,{"name":91,"color":92,"percentage":93},"Cuda","#3A4E3A",3.4,{"name":95,"color":96,"percentage":97},"C++","#f34b7d",1.3,15825,2219,"2026-04-04T09:08:18","MIT","未说明","训练和推理强烈建议使用 NVIDIA GPU。文中明确提及 T4 和 A100 GPU 可获得显著的速度提升（通过 FasterTransformer）。支持纯 FP16 (Apex O2) 训练。具体显存大小取决于模型版本（从 50M 到 1B 参数不等），大模型（如 SwinV2-Giant）需要高显存 GPU。",{"notes":105,"python":102,"dependencies":106},"该仓库是 Swin Transformer 及其后续研究（如 SwinV2, SimMIM, Feature Distillation）的官方实现。代码库本身主要提供图像分类功能，目标检测、语义分割、视频动作识别等任务需跳转到对应的子仓库。支持多种预训练模型（ImageNet-1K\u002F22K），参数量范围从 50M 到 1B。NVIDIA 提供的 FasterTransformer 可大幅加速 T4 和 A100 上的推理。Swin-MoE 变体需要安装 Tutel 库。",[107,108,109,110,111,112],"torch","torchvision","timm","apex (可选，用于 FP16 训练)","tutel (用于 Swin-MoE 混合专家模型)","FasterTransformer (可选，用于 NVIDIA GPU 加速推理)",[26,14],[115,116,117,118,119,120,121,122],"swin-transformer","image-classification","object-detection","semantic-segmentation","imagenet","mscoco","ade20k","mask-rcnn","2026-03-27T02:49:30.150509","2026-04-06T08:27:34.998938",[126,131,136,141,146,150],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},15411,"Swin Transformer 中的循环移位（cyclic 
shift）和掩码（mask）是如何工作的？它们如何影响注意力计算？","循环移位操作会将不同语义区域的特征组合到同一个窗口中。例如，移位后窗口内可能包含原本属于不同区域的块（如区域 4, 5, 7, 8）。为了防止这些不同区域之间进行错误的注意力交互，代码使用了掩码机制。在计算 (N,N) 点积矩阵时，掩码会在不同区域位置之间的注意力分数上加上一个极小的负值（如 -100.0），经过 Softmax 后这些位置的权重接近于 0。这样确保了注意力计算仅在每个独立的语义区域内进行，避免了跨区域的无效信息融合。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer\u002Fissues\u002F52",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},15412,"为什么在计算注意力时是将 attn_mask 加到注意力分数上，而不是相乘？","这是利用 Softmax 函数的特性来实现掩码效果。注意力分数加上掩码值（例如 0 或 -100.0）后，再进行 Softmax 运算。对于需要屏蔽的位置，加上 -100.0 后，该位置的数值会变得非常小，经过 Softmax 后其概率值趋近于 0，从而在效果上忽略该位置的注意力权重。这相当于在逻辑上“屏蔽”了这些连接，而不是通过乘法直接将权重置零。公式示例：[0.5, 0.7, 0.3] + [0, -100, 0] = [0.5, -99.3, 0.3]，Softmax 后会忽略中间的值。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer\u002Fissues\u002F38",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},15413,"在 ImageNet 上评估模型时，如果准确率（Acc@1）显示为 0，可能是什么原因？","这通常是因为验证集数据加载路径或格式配置错误，导致模型无法正确读取标签或图像。常见解决方法是检查是否使用了正确的验证集预处理脚本。例如，可以使用官方提供的脚本重新构建验证集数据结构：\n```bash\nwget https:\u002F\u002Fraw.githubusercontent.com\u002Fsoumith\u002Fimagenetloader.torch\u002Fmaster\u002Fvalprep.sh\nbash valprep.sh\n```\n确保验证集文件夹结构与训练集一致（即每个类别一个子文件夹），并且代码中读取的是目录结构而非单纯的文本列表（除非特别配置）。修正数据加载路径后，准确率应恢复正常。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer\u002Fissues\u002F18",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},15414,"运行分布式训练时报错 'ValueError: Error initializing torch.distributed using env:\u002F\u002F rendezvous: environment variable RANK expected, but not set' 如何解决？","该错误表明环境变量 `RANK` 未设置，这是使用 `env:\u002F\u002F` 初始化分布式进程组所必需的。通常需要使用 `torch.distributed.launch` 或 `torchrun` 来启动脚本，它们会自动设置 `RANK`、`WORLD_SIZE` 等环境变量。例如：\n```bash\npython -m torch.distributed.launch --nproc_per_node=4 main.py --batch-size 64 ...\n```\n或者在新版 PyTorch 中使用：\n```bash\ntorchrun --nproc_per_node=4 main.py --batch-size 64 ...\n```\n不要直接通过 `python main.py` 启动分布式任务。此外，部分用户反馈降低 GCC 版本（如降至 6.5）也可能解决某些环境下的兼容性问题。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer\u002Fissues\u002F17",{"id":147,"question_zh":148,"answer_zh":149,"source_url":135},15415,"如何理解 Swin Transformer 中移位窗口注意力机制生成的掩码矩阵形状和内容？","掩码是通过将图像划分为多个切片（slices）并标记不同区域生成的。代码首先创建一个与输入图像大小相同的掩码图（img_mask），根据窗口大小（window_size）和移位大小（shift_size）将其划分为 3x3 个区域（左上、中上、右上等），每个区域赋予不同的整数值。然后将该掩码图划分为窗口，计算窗口内每个像素对的差值矩阵（attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)）。如果两个像素属于同一区域，差值为 0；否则差值非 0。最后将非 0 位置填充为 -100.0，0 位置保持为 0.0。这样生成的掩码矩阵确保了只有同一原始区域内的像素才能相互关注。",{"id":151,"question_zh":152,"answer_zh":153,"source_url":130},15416,"Swin Transformer 相比传统 CNN 或其他 Transformer 架构有哪些主要优势？","Swin Transformer 引入了移位窗口机制，既保留了 Transformer 建模长距离依赖的能力，又通过局部窗口计算降低了计算复杂度（从图像大小的二次方降为线性）。此外，它具有层次化结构，能够像 CNN 一样提取多尺度特征，适用于密集预测任务（如检测、分割）。微软研究院文章总结了拥抱 Transformer 的五大理由，包括强大的全局建模能力、灵活的架构设计等，而 Swin 进一步通过局部性和移位窗口解决了视觉任务中的尺度变化和计算效率问题。",[]]
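
**附：移位窗口注意力掩码的最小示意代码**

上面几条 FAQ 用文字描述了循环移位后 img_mask 的区域划分、mask_windows 作差生成 attn_mask，以及掩码为何是“加”在注意力分数上。下面给出一段按该描述整理的最小可运行 PyTorch 示意代码，便于直观验证；其中 8x8 特征图、window_size=4、shift_size=2 均为便于打印的假设参数，window_partition 是按常见 Swin 风格实现补写的辅助函数，并非官方源码原文。

```python
import torch
import torch.nn.functional as F

H, W = 8, 8                      # 假设的特征图尺寸（仅作演示）
window_size, shift_size = 4, 2   # 假设的窗口与移位大小

def window_partition(x, window_size):
    """把 (B, H, W, C) 切分为不重叠窗口，返回 (num_windows*B, ws, ws, C)。"""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

# 1. 按窗口与移位边界把特征图划成 3x3 个区域，每个区域赋一个编号
img_mask = torch.zeros((1, H, W, 1))
slices = (slice(0, -window_size), slice(-window_size, -shift_size), slice(-shift_size, None))
cnt = 0
for h in slices:
    for w in slices:
        img_mask[:, h, w, :] = cnt
        cnt += 1

# 2. 切窗口后两两作差：同区域差为 0，跨区域差非 0；非 0 处填 -100.0
mask_windows = window_partition(img_mask, window_size).view(-1, window_size * window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, -100.0).masked_fill(attn_mask == 0, 0.0)
print(attn_mask.shape)  # torch.Size([4, 16, 16])：每个窗口一张 (N, N) 掩码

# 3. 掩码是加到注意力分数上再做 Softmax 的：-100 处的权重趋近于 0
scores = torch.randn_like(attn_mask)        # 用随机分数代替 QK^T/sqrt(d)，仅作演示
weights = F.softmax(scores + attn_mask, dim=-1)
print(weights[attn_mask == -100.0].max())   # 预期是一个非常接近 0 的极小值
```

运行后可以看到，跨区域位置的注意力权重在 Softmax 之后几乎为 0，与 FAQ 中“加 -100 在效果上等于屏蔽”的解释一致。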