[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-microsoft--FocalNet":3,"tool-microsoft--FocalNet":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,2,"2026-04-18T11:18:24",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 
特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":10,"env_os":99,"env_gpu":100,"env_ram":99,"env_deps":101,"category_tags":105,"github_topics":77,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":106,"updated_at":107,"faqs":108,"releases":148},9193,"microsoft\u002FFocalNet","FocalNet","[NeurIPS 2022] Official code for \"Focal Modulation Networks\"","FocalNet 是一种基于“焦点调制”机制的新型深度学习架构，旨在为计算机视觉任务提供更高效、强大的特征提取能力。它主要解决了传统卷积神经网络感受野固定，以及 Transformer 架构中自注意力机制计算成本过高的问题。通过引入焦点调制模块，FocalNet 能够动态地聚合不同范围的上下文信息，既保留了卷积的计算效率，又具备了类似注意力机制的全局建模能力。\n\n该工具特别适合计算机视觉领域的研究人员和开发者使用，尤其是那些从事目标检测、语义分割、全景分割或医学图像分析的专业人士。FocalNet 已在 COCO 和 ADE20K 等权威基准测试中取得了领先的性能表现，并拥有从小型到超大型的多种预训练模型供直接调用或微调。其独特的技术亮点在于摒弃了昂贵的自注意力计算，转而采用层级化的焦点调制策略，实现了精度与速度的优异平衡。此外，FocalNet 生态丰富，不仅支持 PyTorch 和 Keras 框架，还衍生出了针对地球系统分析和医疗影像的专用变体，是探索“超越注意力”网络设计的优秀开源选择。","# [Focal Modulation Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11926)\n\nThis is the official Pytorch implementation of FocalNets:\n\n[\"**Focal Modulation Networks**\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11926) by [Jianwei Yang](https:\u002F\u002Fjwyang.github.io\u002F), [Chunyuan Li](https:\u002F\u002Fchunyuan.li\u002F), [Xiyang Dai](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fxiyangdai\u002F), [Lu Yuan](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=k9TsUVsAAAAJ&hl=en) and [Jianfeng Gao](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpeople\u002Fjfgao\u002F?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fjfgao%2F).\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fa-strong-and-reproducible-object-detector\u002Fobject-detection-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco-minival?p=a-strong-and-reproducible-object-detector) 
\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fa-strong-and-reproducible-object-detector\u002Fobject-detection-on-coco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco?p=a-strong-and-reproducible-object-detector)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Ffocal-modulation-networks\u002Fpanoptic-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-coco-minival?p=focal-modulation-networks)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Ffocal-modulation-networks\u002Fsemantic-segmentation-on-ade20k)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-ade20k?p=focal-modulation-networks)\n\n## News\n* [11\u002F07\u002F2023] Researchers showed that Focal-UNet beats previous methods on several earth system analysis benchmarks. Check out their [code](https:\u002F\u002Fgithub.com\u002FHakamShams\u002FFocal_TSMP), [paper](https:\u002F\u002Fegusphere.copernicus.org\u002Fpreprints\u002F2023\u002Fegusphere-2023-2422\u002F), and [project](https:\u002F\u002Fhakamshams.github.io\u002FFocal-TSMP\u002F)!\n* [06\u002F30\u002F2023] :collision: Please find FocalNet-DINO checkpoints on [huggingface](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft). The old links are deprecated.\n* [04\u002F26\u002F2023] By combining with the [FocalNet-Huge](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet) backbone, Focal-Stable-DINO achieves **64.8 AP** on COCO test-dev *without* any test time augmentation! Check our [Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13027) for more details!\n* [02\u002F13\u002F2023] FocalNet has been integrated into [Keras](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-io), check out the [tutorial](https:\u002F\u002Fkeras.io\u002Fexamples\u002Fvision\u002Ffocal_modulation_network\u002F)!\n* [01\u002F18\u002F2023] Check out a curated paper list that introduces [networks beyond attention](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FNetworks-Beyond-Attention) based on modern convolution and modulation!\n* [01\u002F01\u002F2023] Researchers showed that Focal-UNet beats Swin-UNet on several medical image segmentation benchmarks. Check out their [code](https:\u002F\u002Fgithub.com\u002Fgivkashi\u002FFocal-Unet) and [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09263), and happy new year!\n* [12\u002F16\u002F2022] :collision: We are pleased to release our [FocalNet-Large-DINO checkpoint](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-DINO#model-zoos) pretrained on Object365 and finetuned on COCO, which helps to get **63.5** mAP without tta on COCO minival! Check it out!\n* [11\u002F14\u002F2022] We created a new repo [FocalNet-DINO](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-DINO) to hold the code to reproduce the object detection performance with DINO. We will be releasing the object detection code and checkpoints there. 
Stay tuned!\n* [11\u002F13\u002F2022] :collision: We release our [large, xlarge and huge models](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet#imagenet-22k-pretrained) pretrained on ImageNet-22K, including the one we used to achieve the SoTA on the COCO object detection leaderboard!\n* [11\u002F02\u002F2022] We wrote a [blog post](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fgroup\u002Fdeep-learning-group\u002Farticles\u002Ffocalnets-focusing-the-eyes-with-focal-modulation\u002F) to introduce the insights and techniques behind our FocalNets in a plain way, check it out!\n* [10\u002F31\u002F2022] :collision: We achieved new SoTA with 64.2 box mAP on [COCO minival](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco-minival) and \u003Cstrike>64.3\u003C\u002Fstrike> **64.4** box mAP on [COCO test-dev](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco) based on the powerful OD method [DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDINO)! We used a huge model size (700M), beating much larger attention-based models like SwinV2-G and BEIT-3. Check out our [new version](.\u002FFocalNet_NeurIPS2022_extension.pdf) and stay tuned!\n* [09\u002F20\u002F2022] Our FocalNet has been accepted by NeurIPS 2022!\n* [04\u002F02\u002F2022] Created a [gradio demo in huggingface space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjw2yang\u002Ffocalnet-modulators) to visualize the modulation mechanism. Check it out!\n\n## Introduction\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5ca3123ae74b.png\" width=95% height=95% \nclass=\"center\">\n\u003C\u002Fp>\n\nWe propose **FocalNets: Focal Modulation Networks**, an **attention-free** architecture that achieves superior performance to SoTA self-attention (SA) methods across various vision benchmarks. SA is a first-interaction, last-aggregation (FILA) process as shown above. Our Focal Modulation inverts the process into first-aggregation, last-interaction (FALI). This inversion brings several merits:\n\n* **Translation-Invariance**: It is performed for each target token with the context centered around it.\n* **Explicit input-dependency**: The *modulator* is computed by aggregating the short- and long-range context from the input and then applied to the target token.\n* **Spatial- and channel-specific**: It first aggregates the context spatial-wise and then channel-wise, followed by an element-wise modulation.\n* **Decoupled feature granularity**: The query token preserves the individual information at the finest level, while coarser context is extracted surrounding it. The two are decoupled but connected through the modulation operation.\n* **Easy to implement**: We can implement both context aggregation and interaction in a very simple and light-weight way. 
It does not need softmax, multiple attention heads, feature map rolling or unfolding, etc.\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_c0e0a1e19c46.png\" width=80% height=80% \nclass=\"center\">\n\u003C\u002Fp>\n\nBefore getting started, see what our FocalNets have learned about perceiving images and where to modulate!\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_094c496ebb57.png\" width=90% class=\"center\">\n\u003C\u002Fp>\n\nFinally, FocalNets are built with convolutional and linear layers, but go beyond them by proposing a new modulation mechanism that is simple, generic, effective and efficient. We hereby recommend: \n\n**Focal-Modulation May be What We Need for Visual Modeling!**\n\n## Getting Started\n\n* Please follow [get_started_for_image_classification](.\u002Fclassification) to get started for image classification.\n* Please follow [get_started_for_object_detection](.\u002Fdetection) to get started for object detection.\n* Please follow [get_started_for_semantic_segmentation](.\u002Fsegmentation) to get started for semantic segmentation.\n\n## Benchmarking\n\n### Image Classification on [ImageNet-1K](https:\u002F\u002Fwww.image-net.org\u002F)\n\n* Strict comparison with multi-scale Swin and Focal Transformers:\n\n| Model | Depth | Dim | Kernels | #Params. (M) | FLOPs (G) | Throughput (imgs\u002Fs) | Top-1 | Download\n| :----: | :---: | :---: | :---: | :---: | :--: | :---: | :---: |:---: | \n| FocalNet-T | [2,2,6,2] | 96 | [3,5] | 28.4 | 4.4 | 743 | 82.1 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_tiny_srf.pth)\u002F[config](configs\u002Ffocalnet_tiny_srf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_srf.txt)\n| FocalNet-T | [2,2,6,2] | 96 | [3,5,7] | 28.6 | 4.5 | 696 | 82.3 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_tiny_lrf.pth)\u002F[config](configs\u002Ffocalnet_tiny_lrf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_lrf.txt)\n| FocalNet-S | [2,2,18,2] | 96 | [3,5] | 49.9 | 8.6 | 434 | 83.4 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_small_srf.pth)\u002F[config](configs\u002Ffocalnet_small_srf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_srf.txt)\n| FocalNet-S | [2,2,18,2] | 96 | [3,5,7] | 50.3 | 8.7 | 406 | 83.5 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_small_lrf.pth)\u002F[config](configs\u002Ffocalnet_small_lrf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_lrf.txt)\n| FocalNet-B | [2,2,18,2] | 128 | [3,5] | 88.1 | 15.3 | 280 | 83.7 | 
[ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_base_srf.pth)\u002F[config](configs\u002Ffocalnet_base_srf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_srf.txt)\n| FocalNet-B | [2,2,18,2] | 128 | [3,5,7] | 88.7 | 15.4 | 269 | 83.9 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_base_lrf.pth)\u002F[config](configs\u002Ffocalnet_base_lrf.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_lrf.txt)\n\n* Strict comparison with isotropic ViT models:\n\n| Model | Depth | Dim | Kernels | #Params. (M) | FLOPs (G) | Throughput (imgs\u002Fs) | Top-1 | Download\n| :----: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---: |\n| FocalNet-T | 12 | 192 | [3,5,7] | 5.9 | 1.1 | 2334 | 74.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_tiny_iso_16.pth)\u002F[config](configs\u002Ffocalnet_tiny_iso.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_iso.txt)\n| FocalNet-S | 12 | 384 | [3,5,7] | 22.4 | 4.3 | 920 | 80.9 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_small_iso_16.pth)\u002F[config](configs\u002Ffocalnet_small_iso.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_iso.txt)\n| FocalNet-B | 12 | 768 | [3,5,7] | 87.2 | 16.9 | 300 | 82.4 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_base_iso_16.pth)\u002F[config](configs\u002Ffocalnet_base_iso.yaml)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_iso.txt)\n\n\n### ImageNet-22K Pretraining\n\n| Model | Depth | Dim | Kernels | #Params. 
(M) | Download\n| :----: | :---: | :---: | :---: | :---: | :--: | \n| FocalNet-L | [2,2,18,2] | 192 | [5,7,9] | 207 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_large_lrf_384.pth)\u002F[config](configs\u002Ffocalnet_large_fl3.yaml)\n| FocalNet-L | [2,2,18,2] | 192 | [3,5,7,9] | 207 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_large_lrf_384_fl4.pth)\u002F[config](configs\u002Ffocalnet_large_fl4.yaml)\n| FocalNet-XL | [2,2,18,2] | 256 | [5,7,9] | 366 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_xlarge_lrf_384.pth)\u002F[config](configs\u002Ffocalnet_xlarge_fl3.yaml)\n| FocalNet-XL | [2,2,18,2] | 256 | [3,5,7,9] | 366 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_xlarge_lrf_384_fl4.pth)\u002F[config](configs\u002Ffocalnet_xlarge_fl4.yaml)\n| FocalNet-H | [2,2,18,2] | 352 | [3,5,7] | 687 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_huge_lrf_224.pth)\u002F[config](configs\u002Ffocalnet_huge_fl3.yaml)\n| FocalNet-H | [2,2,18,2] | 352 | [3,5,7,9] | 689 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_huge_lrf_224_fl4.pth)\u002F[config](configs\u002Ffocalnet_huge_fl4.yaml)\n\n**NOTE**: We reorder the class names in imagenet-22k so that we can directly use the first 1k logits for evaluating on imagenet-1k. Note that the 851st class (label=850) of imagenet-1k is missing from imagenet-22k. Please refer to this [labelmap](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fblob\u002Fmain\u002Flabelmap_22k_reorder.txt). More discussion can be found in this [issue](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F27).\n\n### Object Detection on [COCO](https:\u002F\u002Fcocodataset.org\u002F#home)\n\n* [Mask R-CNN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FHe_Mask_R-CNN_ICCV_2017_paper.pdf)\n\n| Backbone   | Kernels   | Lr Schd | #Params. 
(M) | FLOPs (G) | box mAP | mask mAP | Download \n| :---: | :---: | :---:  | :---:   | :---: | :---: | :---: | :---: |\n| FocalNet-T | [9,11]    | 1x      | 48.6 | 267 | 45.9 | 41.3 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_srf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_srf_maskrcnn_1x.json)\n| FocalNet-T | [9,11]    | 3x      | 48.6 | 267 | 47.6 | 42.6 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_srf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_srf_maskrcnn_3x.json)\n| FocalNet-T | [9,11,13] | 1x      | 48.8 | 268 | 46.1 | 41.5 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_maskrcnn_1x.json)\n| FocalNet-T | [9,11,13] | 3x      | 48.8 | 268 | 48.0 | 42.9 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_maskrcnn_3x.json)\n| FocalNet-S | [9,11]    | 1x      | 70.8  | 356 | 48.0 | 42.7 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_srf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_srf_maskrcnn_1x.json)\n| FocalNet-S | [9,11]    | 3x      | 70.8  | 356 | 48.9 | 43.6 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_srf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_srf_maskrcnn_3x.json)\n| FocalNet-S | [9,11,13] | 1x      | 72.3  | 365 | 48.3 | 43.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_lrf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_lrf_maskrcnn_1x.json)\n| FocalNet-S | [9,11,13] | 3x      | 72.3  | 365 | 
49.3 | 43.8 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_lrf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_lrf_maskrcnn_3x.json)\n| FocalNet-B | [9,11]    | 1x      | 109.4 | 496 | 48.8 | 43.3 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_srf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_srf_maskrcnn_1x.json)\n| FocalNet-B | [9,11]    | 3x      | 109.4 | 496 | 49.6 | 44.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_srf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_srf_maskrcnn_3x.json)\n| FocalNet-B | [9,11,13] | 1x      | 111.4 | 507 | 49.0 | 43.5 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_lrf_maskrcnn_1x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_lrf_maskrcnn_1x.json)\n| FocalNet-B | [9,11,13] | 3x      | 111.4 | 507 | 49.8 | 44.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_lrf_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_lrf_maskrcnn_3x.json)\n\n* Other detection methods\n\n| Backbone | Kernels | Method | Lr Schd | #Params. 
(M) | FLOPs (G) | box mAP | Download \n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| FocalNet-T | [11,9,9,7] | [Cascade Mask R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00726) | 3x | 87.1  | 751 | 51.5 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_cascade_maskrcnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fcascade_mask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_cascade_maskrcnn_3x.json)\n| FocalNet-T | [11,9,9,7] | [ATSS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.02424.pdf)           | 3x | 37.2  | 220 | 49.6 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_atss_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fatss_focalnet_tiny_patch4_fpn_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_atss_3x.json)\n| FocalNet-T | [11,9,9,7] | [Sparse R-CNN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.12450.pdf)   | 3x | 111.2 | 178 | 49.9 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_sparsercnn_3x.pth)\u002F[config](detection\u002Fconfigs\u002Ffocalnet\u002Fsparse_rcnn_focalnet_tiny_fpn_300_proposals_crop_mstrain_480-800_3x_coco_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_sparsercnn_3x.json)\n\n### Semantic Segmentation on [ADE20K](https:\u002F\u002Fgroups.csail.mit.edu\u002Fvision\u002Fdatasets\u002FADE20K\u002F)\n\n* Resolution 512x512 and Iters 160k\n\n| Backbone | Kernels  | Method | #Params. 
(M) | FLOPs (G) | mIoU | mIoU (MS) |  Download \n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| FocalNet-T | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 61  | 944 | 46.5 | 47.2 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_tiny_srf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_tiny_patch4_512x512_160k_ade20k_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_tiny_srf_upernet_160k.json)\n| FocalNet-T | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf)  | 61  | 949 | 46.8 | 47.8 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_tiny_lrf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_tiny_patch4_512x512_160k_ade20k_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_tiny_lrf_upernet_160k.json)\n| FocalNet-S | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 83 | 1035 | 49.3 | 50.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_small_srf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_small_patch4_512x512_160k_ade20k_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_small_srf_upernet_160k.json)\n| FocalNet-S | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf)  | 84 | 1044 | 49.1 | 50.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_small_lrf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_small_patch4_512x512_160k_ade20k_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_small_lrf_upernet_160k.json)\n| FocalNet-B | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 124 | 1180 | 50.2 | 51.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_base_srf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_base_patch4_512x512_160k_ade20k_srf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_base_srf_upernet_160k.json)\n| FocalNet-B | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 126 | 1192 | 50.5 | 51.4 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_base_lrf_upernet_160k.pth)\u002F[config](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py)\u002F[log](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_base_lrf_upernet_160k.json)\n\n## Visualizations\n\nThere are three steps in our FocalNets: \n 1. Contextualization with depth-wise conv; \n 2. Multi-scale aggregation with gating mechanism; \n 3. 
Modulator derived from context aggregation and projection. \n\nWe visualize them one by one.\n\n* **Depth-wise convolution kernels** learned in FocalNets:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_eb2f7ed01c31.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\nYellow colors represent higher values. Apparently, FocalNets learn to gather more local context at earlier stages and more global context at later stages.\n\n* **Gating maps** at the last layer of FocalNets for different input images:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5d8b7925d815.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_fa884c77751d.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_121d7d69c2b8.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5d11d326d217.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\nFrom left to right, the images are the input image, the gating maps for focal levels 1, 2 and 3, and the global context. Clearly, our model has learned where to gather the context depending on the visual contents at different locations.\n\n* **Modulator** learned in FocalNets for different input images:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_f127495ed1fc.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\nThe modulator derived from our model automatically learns to focus on the foreground regions.\n\nTo run the visualizations on your own, please refer to the [visualization notebook](tools\u002Fvisualize.ipynb).\n\n## Citation\n\nIf you find this repo useful for your project, please consider citing it with the following bib:\n\n    @misc{yang2022focal,\n          title={Focal Modulation Networks}, \n          author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Jianfeng Gao},\n          journal={Advances in Neural Information Processing Systems (NeurIPS)},\n          year={2022}\n    }\n\n## Acknowledgement\n\nOur codebase is built on top of [Swin Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer) and [Focal Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocal-Transformer). To achieve the SoTA object detection performance, we rely heavily on the advanced method [DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDINO) and the advice from its authors. We thank the authors for the nicely organized code!\n\n## Contributing\n\nThis project welcomes contributions and suggestions.  Most contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit https:\u002F\u002Fcla.opensource.microsoft.com.\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. 
You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002F).\nFor more information see the [Code of Conduct FAQ](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002Ffaq\u002F) or\ncontact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.\n\n## Trademarks\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft \ntrademarks or logos is subject to and must follow \n[Microsoft's Trademark & Brand Guidelines](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Flegal\u002Fintellectualproperty\u002Ftrademarks\u002Fusage\u002Fgeneral).\nUse of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.\nAny use of third-party trademarks or logos is subject to those third parties' policies.\n","# [焦点调制网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11926)\n\n这是FocalNets的官方PyTorch实现：\n\n[\"**焦点调制网络**\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11926) 由 [Jianwei Yang](https:\u002F\u002Fjwyang.github.io\u002F)、[Chunyuan Li](https:\u002F\u002Fchunyuan.li\u002F)、[Xiyang Dai](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fxiyangdai\u002F)、[Lu Yuan](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=k9TsUVsAAAAJ&hl=en) 和 [Jianfeng Gao](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpeople\u002Fjfgao\u002F?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fjfgao%2F) 共同提出。\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fa-strong-and-reproducible-object-detector\u002Fobject-detection-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco-minival?p=a-strong-and-reproducible-object-detector) \n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fa-strong-and-reproducible-object-detector\u002Fobject-detection-on-coco)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco?p=a-strong-and-reproducible-object-detector)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Ffocal-modulation-networks\u002Fpanoptic-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-coco-minival?p=focal-modulation-networks)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Ffocal-modulation-networks\u002Fsemantic-segmentation-on-ade20k)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-ade20k?p=focal-modulation-networks)\n\n## 新闻\n* [2023年11月7日] 研究人员表明，Focal-UNet在多个地球系统分析基准测试中超越了先前的方法。请查看他们的[代码](https:\u002F\u002Fgithub.com\u002FHakamShams\u002FFocal_TSMP)、[论文](https:\u002F\u002Fegusphere.copernicus.org\u002Fpreprints\u002F2023\u002Fegusphere-2023-2422\u002F)和[项目](https:\u002F\u002Fhakamshams.github.io\u002FFocal-TSMP\u002F)！\n* [2023年6月30日] :collision: 请从[huggingface](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft)获取FocalNet-DINO检查点。旧链接已弃用。\n* [2023年4月26日] 通过结合[FocalNet-Huge](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet)主干网络，Focal-Stable-DINO在COCO test-dev上实现了**64.8 AP**，且*无需*任何测试时增强！更多详情请参阅我们的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13027)！\n* [2023年2月13日] 
FocalNet已被集成到[Keras](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras-io)中，请查看[教程](https:\u002F\u002Fkeras.io\u002Fexamples\u002Fvision\u002Ffocal_modulation_network\u002F)！\n* [2023年1月18日] 查看一份精选的论文列表，介绍基于现代卷积和调制的[超越注意力的网络](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FNetworks-Beyond-Attention)！\n* [2023年1月1日] 研究人员表明，Focal-UNet在多个医学图像分割基准测试中超越了Swin-UNet。请查看他们的[代码](https:\u002F\u002Fgithub.com\u002Fgivkashi\u002FFocal-Unet)和[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09263)，祝大家新年快乐！\n* [2022年12月16日] :collision: 我们很高兴发布我们在Object365上预训练并在COCO上微调的[FocalNet-Large-DINO检查点](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-DINO#model-zoos)，该检查点在COCO minival上无需tta即可达到**63.5** mAP！快来看看吧！\n* [2022年11月14日] 我们创建了一个新的仓库[FocalNet-DINO](https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-DINO)，用于存放使用DINO重现目标检测性能的代码。我们将在那里发布目标检测代码和检查点。敬请关注！\n* [2022年11月13日] :collision: 我们发布了在ImageNet-22K上预训练的[large、xlarge和huge模型](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet#imagenet-22k-pretrained)，其中包括我们在COCO目标检测排行榜上取得SoTA时所使用的模型！\n* [2022年11月2日] 我们撰写了一篇[博客文章](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fgroup\u002Fdeep-learning-group\u002Farticles\u002Ffocalnets-focusing-the-eyes-with-focal-modulation\u002F)，以通俗易懂的方式介绍了FocalNets背后的见解和技术，欢迎阅读！\n* [2022年10月31日] :collision: 我们基于强大的OD方法[DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDINO)，在[COCO minival](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco-minival)上取得了新的SoTA，box mAP为64.2；在[COCO test-dev](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fobject-detection-on-coco)上则达到了\u003Cstrike>64.3\u003C\u002Fstrike> **64.4** box mAP！我们使用了巨大的模型规模（700M），击败了像SwinV2-G和BEIT-3这样更大的基于注意力的模型。请查看我们的[新版本](.\u002FFocalNet_NeurIPS2022_extension.pdf)并持续关注！\n* [2022年9月20日] 我们的FocalNet已被NeurIPS 2022接受！\n* [2022年4月2日] 在huggingface空间中创建了一个[gradio演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjw2yang\u002Ffocalnet-modulators)，用于可视化调制机制。快来体验吧！\n\n## 简介\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5ca3123ae74b.png\" width=95% height=95% \nclass=\"center\">\n\u003C\u002Fp>\n\n我们提出了**FocalNets：焦点调制网络**，这是一种**无注意力**架构，在各种视觉基准测试中均表现出优于当前最先进自注意力（SA）方法的性能。如上图所示，SA是一个“先交互、后聚合”（FILA）的过程。而我们的焦点调制则反转了这一过程，采用“先聚合、后交互”（FALI）的方式。这种反转带来了多项优势：\n\n* **平移不变性**：它针对每个目标token进行操作，上下文围绕该token展开。\n* **显式的输入依赖性**：*调制器*通过聚合来自输入的短程和长程上下文来计算，然后应用于目标token。\n* **空间和通道特异性**：它首先按空间维度聚合上下文，再按通道维度聚合，最后进行逐元素调制。\n* **解耦的特征粒度**：查询token保留了最精细级别的个体信息，而其周围的较粗粒度上下文则被提取出来。两者通过调制操作连接在一起，但彼此解耦。\n* **易于实现**：我们可以以非常简单轻量的方式实现上下文聚合和交互操作。它不需要softmax、多头注意力、特征图滚动或展开等操作。\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_c0e0a1e19c46.png\" width=80% height=80% \nclass=\"center\">\n\u003C\u002Fp>\n\n在开始之前，让我们看看我们的FocalNets学到了如何感知图像以及在哪里进行调制吧！\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_094c496ebb57.png\" width=90% class=\"center\">\n\u003C\u002Fp>\n\n总之，FocalNets基于卷积层和线性层构建，但通过提出一种简单、通用、有效且高效的新型调制机制，实现了超越。我们在此推荐：\n\n**焦点调制或许正是我们进行视觉建模所需要的！**\n\n## 开始使用\n\n* 请按照[get_started_for_image_classification](.\u002Fclassification)中的说明，开始进行图像分类。\n* 请按照[get_started_for_object_detection](.\u002Fdetection)中的说明，开始进行目标检测。\n* 请按照[get_started_for_semantic_segmentation](.\u002Fsegmentation)中的说明，开始进行语义分割。\n\n## 基准测试\n\n### ImageNet-1K 上的图像分类\n\n* 与多尺度 Swin 和 Focal Transformer 的严格对比：\n\n| 模型 | 深度 | 尺寸 | 核大小 | 参数量 (M) | 
FLOPs (G) | 吞吐量 (imgs\u002Fs) | Top-1 | 下载 |\n| :----: | :---: | :---: | :---: | :---: | :--: | :---: | :---: |:---: | \n| FocalNet-T | [2,2,6,2] | 96 | [3,5] | 28.4 | 4.4 | 743 | 82.1 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_tiny_srf.pth)\u002F[配置文件](configs\u002Ffocalnet_tiny_srf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_srf.txt)\n| FocalNet-T | [2,2,6,2] | 96 | [3,5,7] | 28.6 | 4.5 | 696 | 82.3 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_tiny_lrf.pth)\u002F[配置文件](configs\u002Ffocalnet_tiny_lrf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_lrf.txt)\n| FocalNet-S | [2,2,18,2] | 96 | [3,5] | 49.9 | 8.6 | 434 | 83.4 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_small_srf.pth)\u002F[配置文件](configs\u002Ffocalnet_small_srf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_srf.txt)\n| FocalNet-S | [2,2,18,2] | 96 | [3,5,7] | 50.3 | 8.7 | 406 | 83.5 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_small_lrf.pth)\u002F[配置文件](configs\u002Ffocalnet_small_lrf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_lrf.txt)\n| FocalNet-B | [2,2,18,2] | 128 | [3,5] | 88.1 | 15.3 | 280 | 83.7 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_base_srf.pth)\u002F[配置文件](configs\u002Ffocalnet_base_srf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_srf.txt)\n| FocalNet-B | [2,2,18,2] | 128 | [3,5,7] | 88.7 | 15.4 | 269 | 83.9 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_base_lrf.pth)\u002F[配置文件](configs\u002Ffocalnet_base_lrf.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_lrf.txt)\n\n* 与各向同性 ViT 模型的严格对比：\n\n| 模型 | 深度 | 尺寸 | 核大小 | 参数量 (M) | FLOPs (G) | 吞吐量 (imgs\u002Fs) | Top-1 | 下载 |\n| :----: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |:---: |\n| FocalNet-T | 12 | 192 | [3,5,7] | 5.9 | 1.1 | 2334 | 74.1 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_tiny_iso_16.pth)\u002F[配置文件](configs\u002Ffocalnet_tiny_iso.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_tiny_iso.txt)\n| FocalNet-S | 12 | 384 | [3,5,7] | 22.4 | 4.3 | 920 | 80.9 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_small_iso_16.pth)\u002F[配置文件](configs\u002Ffocalnet_small_iso.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_small_iso.txt)\n| FocalNet-B | 12 | 768 | [3,5,7] | 87.2 | 16.9 | 300 | 82.4 | 
[ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_base_iso_16.pth)\u002F[配置文件](configs\u002Ffocalnet_base_iso.yaml)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Flog_focalnet_base_iso.txt)\n\n\n### ImageNet-22K 预训练\n\n| 模型 | 深度 | 尺寸 | 核大小 | 参数量 (M) | 下载 |\n| :----: | :---: | :---: | :---: | :---: | :--: | \n| FocalNet-L | [2,2,18,2] | 192 | [5,7,9] | 207 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_large_lrf_384.pth)\u002F[配置文件](configs\u002Ffocalnet_large_fl3.yaml)\n| FocalNet-L | [2,2,18,2] | 192 | [3,5,7,9] | 207 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_large_lrf_384_fl4.pth)\u002F[配置文件](configs\u002Ffocalnet_large_fl4.yaml)\n| FocalNet-XL | [2,2,18,2] | 256 | [5,7,9] | 366 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_xlarge_lrf_384.pth)\u002F[配置文件](configs\u002Ffocalnet_xlarge_fl3.yaml)\n| FocalNet-XL | [2,2,18,2] | 256 | [3,5,7,9] | 366 | [ckpt](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_xlarge_lrf_384_fl4.pth)\u002F[配置文件](configs\u002Ffocalnet_xlarge_fl4.yaml)\n| FocalNet-H | [2,2,18,2] | 352 | [3,5,7] | 687 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_huge_lrf_224.pth)\u002F[配置文件](configs\u002Ffocalnet_huge_fl3.yaml)\n| FocalNet-H | [2,2,18,2] | 352 | [3,5,7,9] | 689 | [ckpt](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fclassification\u002Ffocalnet_huge_lrf_224_fl4.pth)\u002F[配置文件](configs\u002Ffocalnet_huge_fl4.yaml)\n\n**注**: 我们对 ImageNet-22K 中的类别名称进行了重新排序，以便可以直接使用前 1000 个 logits 来在 ImageNet-1K 上进行评估。请注意，ImageNet-1K 中的第 851 类（标签=850）在 ImageNet-22K 中缺失。请参考此 [labelmap](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fblob\u002Fmain\u002Flabelmap_22k_reorder.txt)。更多讨论请参见此 [issue](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F27)。\n\n### COCO 数据集上的目标检测\n\n* [Mask R-CNN](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FHe_Mask_R-CNN_ICCV_2017_paper.pdf)\n\n| 主干网络   | 卷积核设置   | 学习率调度 | 参数量 (M) | 浮点运算次数 (G) | box mAP | mask mAP | 下载 \n| :---: | :---: | :---:  | :---:   | :---: | :---: | :---: | :---: |\n| FocalNet-T | [9,11]    | 1x      | 48.6 | 267 | 45.9 | 41.3 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_srf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_srf_maskrcnn_1x.json)\n| FocalNet-T | [9,11]    | 3x      | 48.6 | 267 | 47.6 | 42.6 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_srf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_srf_maskrcnn_3x.json)\n| FocalNet-T | [9,11,13] | 1x   
   | 48.8 | 268 | 46.1 | 41.5 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_maskrcnn_1x.json)\n| FocalNet-T | [9,11,13] | 3x      | 48.8 | 268 | 48.0 | 42.9 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_maskrcnn_3x.json)\n| FocalNet-S | [9,11]    | 1x      | 70.8  | 356 | 48.0 | 42.7 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_srf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_srf_maskrcnn_1x.json)\n| FocalNet-S | [9,11]    | 3x      | 70.8  | 356 | 48.9 | 43.6 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_srf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_srf_maskrcnn_3x.json)\n| FocalNet-S | [9,11,13] | 1x      | 72.3  | 365 | 48.3 | 43.1 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_lrf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_lrf_maskrcnn_1x.json)\n| FocalNet-S | [9,11,13] | 3x      | 72.3  | 365 | 49.3 | 43.8 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_small_lrf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_small_lrf_maskrcnn_3x.json)\n| FocalNet-B | [9,11]    | 1x      | 109.4 | 496 | 48.8 | 43.3 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_srf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_1x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_srf_maskrcnn_1x.json)\n| FocalNet-B | [9,11]    | 3x      | 109.4 | 496 | 49.6 | 44.1 | 
[检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_srf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_3x_coco_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_srf_maskrcnn_3x.json)\n| FocalNet-B | [9,11,13] | 1x      | 111.4 | 507 | 49.0 | 43.5 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_lrf_maskrcnn_1x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_1x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_lrf_maskrcnn_1x.json)\n| FocalNet-B | [9,11,13] | 3x      | 111.4 | 507 | 49.8 | 44.1 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_base_lrf_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fmask_rcnn_focalnet_base_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_base_lrf_maskrcnn_3x.json)\n\n* 其他检测方法\n\n| 主干网络 | 卷积核设置 | 方法 | 学习率调度 | 参数量 (M) | 浮点运算次数 (G) | box mAP | 下载 \n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| FocalNet-T | [11,9,9,7] | [Cascade Mask R-CNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00726) | 3x | 87.1  | 751 | 51.5 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_cascade_maskrcnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fcascade_mask_rcnn_focalnet_tiny_patch4_mstrain_480-800_adamw_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_cascade_maskrcnn_3x.json)\n| FocalNet-T | [11,9,9,7] | [ATSS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.02424.pdf)           | 3x | 37.2  | 220 | 49.6 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_atss_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fatss_focalnet_tiny_patch4_fpn_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_atss_3x.json)\n| FocalNet-T | [11,9,9,7] | [Sparse R-CNN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.12450.pdf)   | 3x | 111.2 | 178 | 49.9 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Ffocalnet_tiny_lrf_sparsercnn_3x.pth)\u002F[配置文件](detection\u002Fconfigs\u002Ffocalnet\u002Fsparse_rcnn_focalnet_tiny_fpn_300_proposals_crop_mstrain_480-800_3x_coco_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fdetection\u002Flog_focalnet_tiny_lrf_sparsercnn_3x.json)\n\n### ADE20K 数据集上的语义分割\n\n* 分辨率 512x512，迭代次数 16万次\n\n| 主干网络 | 卷积核设置 | 方法 | 参数量 (M) | FLOPs (G) | mIoU | mIoU (多尺度) | 下载 |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| FocalNet-T | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 61  | 944 | 46.5 | 47.2 | 
[检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_tiny_srf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_tiny_patch4_512x512_160k_ade20k_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_tiny_srf_upernet_160k.json)\n| FocalNet-T | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf)  | 61  | 949 | 46.8 | 47.8 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_tiny_lrf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_tiny_patch4_512x512_160k_ade20k_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_tiny_lrf_upernet_160k.json)\n| FocalNet-S | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 83 | 1035 | 49.3 | 50.1 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_small_srf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_small_patch4_512x512_160k_ade20k_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_small_srf_upernet_160k.json)\n| FocalNet-S | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf)  | 84 | 1044 | 49.1 | 50.1 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_small_lrf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_small_patch4_512x512_160k_ade20k_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_small_lrf_upernet_160k.json)\n| FocalNet-B | [9,11] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 124 | 1180 | 50.2 | 51.1 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_base_srf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_base_patch4_512x512_160k_ade20k_srf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_base_srf_upernet_160k.json)\n| FocalNet-B | [9,11,13] | [UPerNet](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.10221.pdf) | 126 | 1192 | 50.5 | 51.4 | [检查点](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Ffocalnet_base_lrf_upernet_160k.pth)\u002F[配置文件](segmentation\u002Fconfigs\u002Ffocalnet\u002Fupernet_focalnet_base_patch4_512x512_160k_ade20k_lrf.py)\u002F[日志](https:\u002F\u002Fprojects4jw.blob.core.windows.net\u002Ffocalnet\u002Frelease\u002Fsegmentation\u002Flog_focalnet_base_lrf_upernet_160k.json)\n\n## 可视化\n\n我们的 FocalNets 包含三个步骤：\n1. 使用逐通道卷积（depth-wise conv）进行上下文建模；\n2. 通过门控机制进行多尺度聚合；\n3. 
\n\n我们逐一展示这些步骤的可视化效果。\n\n* **FocalNets 中学习到的深度可分离卷积核**：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_eb2f7ed01c31.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\n黄色表示较高的值。显然，FocalNets 在早期阶段学习聚集更多的局部上下文，而在后期阶段则更多地关注全局上下文。\n\n* **FocalNets 最后一层针对不同输入图像的门控图**：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5d8b7925d815.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_fa884c77751d.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_121d7d69c2b8.png\" width=70% height=70% class=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_5d11d326d217.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\n从左至右依次为输入图像、第 1、2、3 个焦点层级的门控图以及全局上下文图。很明显，我们的模型已经学会根据不同位置的视觉内容来决定在哪里聚集上下文信息。\n\n* **FocalNets 中针对不同输入图像学习到的调制器**：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_readme_f127495ed1fc.png\" width=70% height=70% class=\"center\">\n\u003C\u002Fp>\n\n由我们的模型生成的调制器能够自动聚焦于前景区域。\n\n如需自行进行可视化，请参阅 [可视化笔记本](tools\u002Fvisualize.ipynb)。\n\n## 引用\n\n如果您觉得本仓库对您的项目有所帮助，请考虑使用以下 BibTeX 格式引用：\n\n    @misc{yang2022focal,\n          title={Focal Modulation Networks}, \n          author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Jianfeng Gao},\n          journal={Advances in Neural Information Processing Systems (NeurIPS)},\n          year={2022}\n    }\n\n## 致谢\n\n我们的代码库基于 [Swin Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer) 和 [Focal Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocal-Transformer) 构建。为了达到当前最优的目标检测性能，我们高度依赖最先进的方法 [DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDINO) 以及作者们的宝贵建议。感谢作者们精心组织的代码！\n\n## 贡献\n\n本项目欢迎各类贡献和建议。大多数贡献都需要您同意贡献者许可协议（CLA），以声明您有权并确实授予我们使用您贡献的权利。详情请访问 https:\u002F\u002Fcla.opensource.microsoft.com。\n\n当您提交拉取请求时，CLA 机器人会自动判断您是否需要提供 CLA，并相应地标记 PR（例如状态检查、评论）。只需按照机器人提供的指示操作即可。对于所有使用我们 CLA 的仓库，您只需完成一次此流程。\n\n本项目已采用 [微软开源行为准则](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002F)。更多信息请参阅 [行为准则常见问题解答](https:\u002F\u002Fopensource.microsoft.com\u002Fcodeofconduct\u002Ffaq\u002F) 或发送邮件至 [opencode@microsoft.com](mailto:opencode@microsoft.com) 提出任何其他问题或意见。\n\n## 商标\n\n本项目可能包含项目、产品或服务相关的商标或标志。对微软商标或标志的授权使用须遵守并遵循 [微软商标与品牌指南](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Flegal\u002Fintellectualproperty\u002Ftrademarks\u002Fusage\u002Fgeneral)。在本项目的修改版本中使用微软商标或标志不得造成混淆或暗示微软的赞助。任何第三方商标或标志的使用均受其各自政策的约束。","# FocalNet 快速上手指南\n\nFocalNet (Focal Modulation Networks) 是一种无需注意力机制（Attention-free）的视觉架构，通过“先聚合后交互”的焦点调制机制，在图像分类、目标检测和语义分割等任务中实现了超越主流自注意力模型的性能。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 Windows (WSL2)\n*   **Python**: 3.7 或更高版本\n*   **GPU**: 支持 CUDA 的 NVIDIA GPU (推荐显存 16GB+ 以运行大型模型)\n*   **核心依赖**:\n    *   PyTorch >= 1.8\n    *   torchvision\n    *   timm\n    *   mmcv \u002F mmdetection (若用于目标检测)\n    *   mmsegmentation (若用于语义分割)\n\n> **国内加速建议**：建议使用清华源或阿里源安装 Python 依赖，以提升下载速度。\n> ```bash\n> pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple torch torchvision timm\n> ```\n\n
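在进入安装步骤之前，可以先用几行 Python 做一次环境自检（示意脚本，非官方提供），确认 PyTorch 版本与 GPU 显存是否满足上述要求：\n\n```python\nimport torch\n\n# 检查 PyTorch 版本（建议 >= 1.8）与 CUDA 是否可用\nprint(\"PyTorch:\", torch.__version__)\nprint(\"CUDA available:\", torch.cuda.is_available())\nif torch.cuda.is_available():\n    # 打印显存大小，便于判断能否运行较大的 FocalNet 模型\n    prop = torch.cuda.get_device_properties(0)\n    print(f\"GPU: {prop.name}, {prop.total_memory * 2 ** -30:.1f} GB\")\n```\n\n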
## 安装步骤\n\n### 1. 克隆代码库\n从 GitHub 获取官方源代码：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet.git\ncd FocalNet\n```\n\n### 2. 安装依赖\n项目根目录下通常包含 `requirements.txt`，请使用以下命令安装所需包（推荐使用国内镜像）：\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n如果需要进行目标检测或语义分割任务，请分别参考 `detection\u002F` 和 `segmentation\u002F` 目录下的具体环境配置说明（通常涉及安装 MMDetection 或 MMSegmentation）。\n\n## 基本使用\n\n以下以最常用的 **ImageNet-1K 图像分类** 为例，展示如何加载预训练模型并进行推理。\n\n### 1. 下载预训练模型\n从 Release 页面下载 Tiny 版本的预训练权重（示例）：\n```bash\nwget https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Freleases\u002Fdownload\u002Fv1.0.0\u002Ffocalnet_tiny_srf.pth\n```\n\n### 2. Python 推理示例\n创建一个 `infer.py` 文件，编写以下代码加载模型并预测图片：\n\n```python\nimport torch\nfrom PIL import Image\nfrom torchvision import transforms\nfrom models import focalnet\n\n# 1. 配置设备\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n# 2. 构建模型 (以 FocalNet-Tiny SRF 为例，深度 [2,2,6,2]、通道 96、核大小 [3,5])\n# 此处假设使用仓库在 models\u002Ffocalnet.py 中注册的工厂函数，函数名以实际代码为准\nmodel = focalnet.focalnet_tiny_srf()\n\n# 3. 加载预训练权重（官方检查点通常将权重存放在 'model' 键下）\ncheckpoint = torch.load('focalnet_tiny_srf.pth', map_location=device)\nstate_dict = checkpoint['model'] if 'model' in checkpoint else checkpoint\nmodel.load_state_dict(state_dict, strict=True)\nmodel.to(device)\nmodel.eval()\n\n# 4. 图像预处理\ntransform = transforms.Compose([\n    transforms.Resize(256),\n    transforms.CenterCrop(224),\n    transforms.ToTensor(),\n    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n])\n\n# 5. 推理\nimage = Image.open(\"your_image.jpg\").convert('RGB')\ninput_tensor = transform(image).unsqueeze(0).to(device)\n\nwith torch.no_grad():\n    output = model(input_tensor)\n    prediction = torch.argmax(output, dim=1)\n\nprint(f\"Predicted class index: {prediction.item()}\")\n```\n\n
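在上述脚本的基础上，还可以进一步打印 Top-5 类别的置信度，用来快速确认权重是否正确加载（以下片段续接上文的 `output` 变量；类别名称需自行对照 ImageNet-1K 标签文件）：\n\n```python\n# 续接上文 infer.py：output 为形状 (1, 1000) 的分类 logits\nprobs = torch.softmax(output, dim=1)[0]\ntop5_prob, top5_idx = torch.topk(probs, k=5)\nfor p, idx in zip(top5_prob.tolist(), top5_idx.tolist()):\n    print(f\"class {idx}: {p:.4f}\")\n```\n\n若 Top-1 概率明显偏低或接近均匀分布，通常说明权重未正确加载，或预处理方式与训练设置不一致。\n\n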
### 3. 其他任务入口\n*   **目标检测**: 请参考 `detection\u002F` 目录，基于 MMDetection 框架运行训练和测试脚本。\n*   **语义分割**: 请参考 `segmentation\u002F` 目录，基于 MMSegmentation 框架运行相关脚本。\n*   **可视化演示**: 可访问 HuggingFace Space 查看焦点调制机制的可视化效果。","某医疗影像实验室的研究团队正致力于开发一套自动化的肺部 CT 扫描病灶分割系统，以辅助医生快速定位早期肿瘤。\n\n### 没有 FocalNet 时\n- **微小病灶漏检率高**：传统卷积神经网络或早期 Transformer 模型难以兼顾局部细节与全局上下文，导致直径小于 5mm 的微小结节经常被忽略。\n- **计算资源消耗巨大**：为了提升精度强行增加网络深度或引入复杂的注意力机制，使得推理速度缓慢，无法满足临床实时诊断的需求。\n- **边界分割模糊**：在处理肿瘤与正常组织交界模糊的区域时，模型往往产生锯齿状边缘，严重影响后续体积测量的准确性。\n- **训练收敛困难**：在数据量有限的医疗数据集上，复杂模型极易过拟合，需要耗费大量时间进行繁琐的数据增强和超参数调优。\n\n### 使用 FocalNet 后\n- **多尺度特征精准捕捉**：FocalNet 独特的焦点调制机制能自适应地聚焦不同尺度的信息，显著提升了微小结节的检出率，明显降低了漏检率。\n- **效率与精度完美平衡**：得益于高效的架构设计，FocalNet 在保持 SOTA（最先进）精度的同时，大幅降低了计算开销，实现了秒级影像分析。\n- **病灶边缘清晰锐利**：模型对长距离依赖的建模能力增强，使得肿瘤边界的分割结果更加平滑自然，极大提高了定量分析的可靠性。\n- **小样本表现稳健**：凭借强大的特征提取能力，FocalNet 在有限标注数据下也能快速收敛且泛化能力强，减少了团队在数据预处理上的投入。\n\nFocalNet 通过创新的焦点调制技术，成功解决了医疗影像中“看得清”与“算得快”难以兼得的痛点，让 AI 辅助诊断真正具备了临床落地价值。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmicrosoft_FocalNet_5ca3123a.png","microsoft","Microsoft","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmicrosoft_4900709c.png","Open source projects and samples from Microsoft",null,"opensource@microsoft.com","OpenAtMicrosoft","https:\u002F\u002Fopensource.microsoft.com","https:\u002F\u002Fgithub.com\u002Fmicrosoft",[83,87,91],{"name":84,"color":85,"percentage":86},"Python","#3572A5",97.7,{"name":88,"color":89,"percentage":90},"Jupyter Notebook","#DA5B0B",2.2,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0.1,751,59,"2026-04-13T02:41:12","MIT","未说明","需要 NVIDIA GPU（基于 PyTorch 实现；大规模模型如 FocalNet-Huge 约有 700M 参数），具体显存和 CUDA 版本未在 README 片段中明确，通常建议 16GB+ 显存及 CUDA 11.x+",{"notes":102,"python":99,"dependencies":103},"该工具是 FocalNets 的官方 PyTorch 实现。README 片段主要介绍了模型架构、新闻更新及 ImageNet 上的基准测试结果，未包含具体的安装命令、requirements.txt 内容或详细的硬件配置清单。不同规模的模型（Tiny 到 Huge）对资源需求差异巨大，大型模型（如 FocalNet-Huge）需要高性能 GPU 支持。",[104],"Pytorch",[15],"2026-03-27T02:49:30.150509","2026-04-19T03:05:02.593657",[109,114,119,124,129,133,138,143],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},41280,"加载在 Object365 上预训练的 FocalNet-DINO 模型时报错 'Unknown backbone' 或类别不匹配，如何解决？","这是因为配置文件中的类别数量设置不正确。如果要加载在 Object365 上预训练的模型进行评估，必须确保模型配置中的 `num_classes` 设置为 366（而不是默认的 91）。此外，可能还需要将 `dn_labelbook_size` 参数调整为 400。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F25",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},41281,"在哪里可以找到用于可视化调制图（Modulation Map）的预训练权重文件（如 focalnet_base_lrf.pth）？","用于可视化的特定权重文件通常包含在主要的预训练权重包中，或者可以通过上述提到的 Hugging Face 仓库下载对应的骨干网络权重。用户需下载相应的 `.pth` 文件并在 `visualization.ipynb` 中指定正确的路径。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F22",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},41278,"如何获取在 COCO 检测任务上达到 SOTA 的 DINO 版 FocalNet 模型代码和权重？","目标检测相关的代码和模型已发布在独立的仓库中，请访问：https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-DINO。该仓库包含了基于 DINO 的 FocalNet 实现及相关预训练权重。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F9",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},41279,"下载预训练权重时遇到 'PublicAccessNotPermitted' 错误怎么办？","原始存储链接的公共访问权限已被禁用。维护者已将权重迁移至 Hugging Face。请前往 Hugging Face 下载相关模型（例如分类骨干网络）。对于 DINO 版本，具体下载地址为：\n1. Object365 预训练：https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Ffocalnet-large-fl4-dino-o365\u002Fresolve\u002Fmain\u002Ffocalnet_large_fl4_pretrained_on_o365.pth\n2. 
COCO 微调：https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Ffocalnet-large-fl4-dino-o365-cocoft\u002Fresolve\u002Fmain\u002Ffocalnet_large_fl4_o365_finetuned_on_coco.pth","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F43",{"id":130,"question_zh":131,"answer_zh":132,"source_url":118},41282,"如何从代码中提取每个像素的注意力分数（Attention\u002FFocus Scores）而不仅仅是可视化图像？","注意力分数可以通过修改模型前向传播代码来获取。具体而言，关注计算输出特征的部分，通常在 `x_out = q * self.modulator` 这一行。`self.modulator` 或相关变量中包含了调制系数（其作用类似注意力权重），您可以直接提取这些数值作为特征输入到其他模型中，而不仅限于生成可视化热力图。",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},41283,"找不到 tools\u002Ftrain.py 文件，或者想进行分割\u002F检测任务训练该怎么办？","为了方便复现实验，官方已将相关文件整合到 MMDetection 和 MMSegmentation 的适配仓库中。请不要直接在主仓库寻找训练脚本，而是克隆以下专用仓库：\n- 目标检测：https:\u002F\u002Fgithub.com\u002FFocalNet\u002FFocalNet-MMDetection\n- 语义分割：需要将 FocalNet 代码放入 mmsegmentation 框架中或使用对应的适配仓库。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F11",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},41284,"在使用 AMP O1 混合精度训练 FocalNet 时遇到梯度溢出（Gradient Overflow）导致 Loss 变为 NaN，如何解决？","这是一个已知问题，可能与特定的硬件配置或 Dropout 设置有关。虽然维护者曾尝试修复，但如果问题依旧存在，建议检查是否使用了与官方完全一致的配置文件。如果使用的是 `focalnet_small_lrf` 且在 Swin Transformer 代码库中运行效果不佳，请确保严格遵循官方提供的配置参数，并留意官方仓库是否有最新的代码更新以解决此溢出问题。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F2",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},41285,"将 FocalNet 模型转换为 ONNX 格式时出现形状不匹配（shape mismatch）错误，原因是什么？","这通常是由于 MMDetection 版本不一致导致的。请检查您使用的 `mmdet` 版本是否与模型权重训练时的版本匹配；如果版本不同，可能会导致 ROI Head 等部分的输出形状预期不一致。可通过设置断点调试，确认 `bbox_pred` 在 reshape 操作前的数据维度是否符合预期。","https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFocalNet\u002Fissues\u002F26",[149,153],{"id":150,"version":151,"summary_zh":77,"released_at":152},333236,"v1.0.1","2023-09-09T23:03:24",{"id":154,"version":155,"summary_zh":156,"released_at":157},333237,"v1.0.0","用于图像分类的 FocalNet 检查点","2023-06-01T17:21:55"]