[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-sail-sg--poolformer":3,"tool-sail-sg--poolformer":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
- **ragflow** ([infiniflow/ragflow](https://github.com/infiniflow/ragflow), 77,062 stars): A leading open-source retrieval-augmented generation (RAG) engine that builds a grounded context layer for LLMs. Deep parsing of complex documents (tables, charts, mixed layouts) improves retrieval accuracy and curbs hallucination, while built-in agents, a visual workflow editor, and flexible APIs serve both no-code users and developers. Apache-2.0 licensed.

## Overview

PoolFormer is a lightweight vision Transformer codebase built around one claim: what makes Transformers strong on image tasks is the overall architecture, dubbed MetaFormer, rather than the elaborate attention mechanism. To test this, the authors replace the attention module with plain pooling, the simplest token mixer imaginable, and the resulting PoolFormer still beats DeiT, ResMLP, and similar models on ImageNet with fewer parameters and faster inference. This dispels the assumption that strong vision models require complex attention and gives researchers and engineers a "simplicity works" baseline. The code is PyTorch-based and ships complete configs and pretrained weights for image classification, object detection, instance segmentation, and semantic segmentation, along with Grad-CAM visualization and MACs counting. If you do vision research, want to validate ideas quickly, or need an efficient model for edge deployment, PoolFormer is a solid starting point.

- **Repository**: [sail-sg/poolformer](https://github.com/sail-sg/poolformer)
- **Owner**: Sea AI Lab ([sail-sg](https://github.com/sail-sg), https://sail.sea.com)
- **Stars / Forks**: 1,366 / 118
- **License**: Apache-2.0
- **Languages**: Python 99.3%, Shell 0.7%
- **Latest release**: v1.0 (2021-11-22)
- **GitHub topics**: transformer, mlp, pooling, image-classification, pytorch
- **Environment**: Linux; NVIDIA GPU with 8 GB+ VRAM and CUDA 11.0+ (NVIDIA apex required for fp16); RAM requirement unspecified
- **Dependencies**: torch>=1.7.0, torchvision>=0.8.0, pyyaml, timm, fvcore
- **Notes**: the training examples assume 8 GPUs; ImageNet must follow the prescribed directory layout; inference can be tried online via Hugging Face Spaces or Colab

---

# PoolFormer: [MetaFormer Is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) (CVPR 2022 Oral)

<p align="center">
<a href="https://arxiv.org/abs/2111.11418" alt="arXiv">
    <img src="https://img.shields.io/badge/arXiv-2111.11418-b31b1b.svg?style=flat" /></a>
<a href="https://huggingface.co/spaces/akhaliq/poolformer" alt="Hugging Face Spaces">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue" /></a>
<a href="https://colab.research.google.com/drive/1n1UK4ihfiySTWTDuusAhm_6CLm1h4bTj?usp=sharing" alt="Colab">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
</p>


---
:fire: :fire: Our follow-up work "[MetaFormer Baselines for Vision](https://arxiv.org/abs/2210.13452)" (code: [metaformer](https://github.com/sail-sg/metaformer)) introduces more MetaFormer baselines, including:
+ **IdentityFormer**, with identity mapping as the token mixer, surprisingly achieves >80% accuracy.
+ **RandFormer** achieves >81% accuracy with random token mixing, demonstrating that MetaFormer works well with arbitrary token mixers.
+ **ConvFormer**, with separable convolutions as the token mixer, significantly outperforms ConvNeXt.
+ **CAFormer**, with token mixers of separable convolutions and vanilla self-attention, sets a new record on ImageNet-1K.

---


This is a PyTorch implementation of **PoolFormer**, proposed by our paper "[MetaFormer Is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418)" (CVPR 2022 Oral).


**Note**: Rather than designing complicated token mixers to achieve SOTA performance, the goal of this work is to demonstrate that the competence of Transformer models largely stems from the general architecture, MetaFormer. Pooling/PoolFormer are just the tools to support our claim.

![MetaFormer](https://oss.gittoolsai.com/images/sail-sg_poolformer_readme_5a4da7e1b22e.png)
Figure 1: **MetaFormer and performance of MetaFormer-based models on the ImageNet-1K validation set.**
We argue that the competence of Transformer/MLP-like models primarily stems from the general architecture MetaFormer rather than the specific token mixers they are equipped with.
To demonstrate this, we exploit an embarrassingly simple non-parametric operator, pooling, to conduct extremely basic token mixing.
Surprisingly, the resulting model PoolFormer consistently outperforms DeiT and ResMLP as shown in (b), which strongly supports that MetaFormer is actually what we need to achieve competitive performance. RSB-ResNet in (b) means the results are from "ResNet Strikes Back", where ResNet is trained with an improved training procedure for 300 epochs.


<p align="center">
  <img src="https://oss.gittoolsai.com/images/sail-sg_poolformer_readme_887cb0a70d24.png" alt="PoolFormer"/>
</p>

Figure 2: (a) **The overall framework of PoolFormer.** (b) **The architecture of the PoolFormer block.** Compared with a Transformer block, it replaces attention with an extremely simple non-parametric operator, pooling, to conduct only basic token mixing.
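To make the block in Figure 2 concrete, here is a minimal PyTorch sketch condensed from the design the paper describes: pooling minus identity as the token mixer, channel-wise GroupNorm, a 1x1-convolution MLP, and the layer_scale parameters discussed in the FAQ below. Names are illustrative; see the repository's model code for the authoritative implementation.

```python
import torch
import torch.nn as nn

class Pooling(nn.Module):
    """Token mixer: average pooling with the input subtracted, so only the
    pooled (mixed) signal flows through the residual branch."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):                  # x: (B, C, H, W)
        return self.pool(x) - x

class PoolFormerBlock(nn.Module):
    """One MetaFormer block with pooling as the token mixer (sketch)."""
    def __init__(self, dim, mlp_ratio=4, layer_scale_init=1e-5):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)  # channel-wise norm, as in the paper
        self.token_mixer = Pooling()
        self.norm2 = nn.GroupNorm(1, dim)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(          # channel MLP as 1x1 convolutions
            nn.Conv2d(dim, hidden, 1), nn.GELU(), nn.Conv2d(hidden, dim, 1))
        # layer_scale: learnable per-channel scalars that stabilize deep training
        self.scale1 = nn.Parameter(layer_scale_init * torch.ones(dim))
        self.scale2 = nn.Parameter(layer_scale_init * torch.ones(dim))

    def forward(self, x):
        x = x + self.scale1.view(1, -1, 1, 1) * self.token_mixer(self.norm1(x))
        x = x + self.scale2.view(1, -1, 1, 1) * self.mlp(self.norm2(x))
        return x

block = PoolFormerBlock(64)
print(block(torch.randn(2, 64, 56, 56)).shape)  # torch.Size([2, 64, 56, 56])
```

Note that the pooling token mixer adds no parameters at all; everything learnable lives in the norms, the channel MLP, and the layer_scale vectors.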
## BibTeX
```
@inproceedings{yu2022metaformer,
  title={Metaformer is actually what you need for vision},
  author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10819--10829},
  year={2022}
}
```

Configs and trained models for **detection and instance segmentation on COCO** are [here](detection/).

Configs and trained models for **semantic segmentation on ADE20K** are [here](segmentation/).

The code to visualize Grad-CAM activation maps of PoolFormer, DeiT, ResMLP, ResNet, and Swin is [here](misc/cam_image.py).

The code to measure MACs is [here](misc/mac_count_with_fvcore.py).
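As a standalone illustration of what that script does (this is not the repo script, and it assumes a timm version that registers the poolformer models), fvcore can count the multiply-accumulates directly. Note that fvcore labels the result "flops", but the count is MACs:

```python
import torch
import timm
from fvcore.nn import FlopCountAnalysis

# Count multiply-accumulates for one 224x224 input.
model = timm.create_model('poolformer_s12')
model.eval()
macs = FlopCountAnalysis(model, torch.randn(1, 3, 224, 224)).total()
print(f"{macs / 1e9:.1f}G MACs")  # ~1.8G, matching the #MACs column in the table below
```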
## Image Classification
### 1. Requirements

torch>=1.7.0; torchvision>=0.8.0; pyyaml; [apex-amp](https://github.com/NVIDIA/apex) (if you want to use fp16); [timm](https://github.com/rwightman/pytorch-image-models) (`pip install git+https://github.com/rwightman/pytorch-image-models.git@9d6aad44f8fd32e89e5cca503efe3ada5071cc2a`)

Data preparation: ImageNet with the following folder structure. You can extract ImageNet with this [script](https://gist.github.com/BIGBALLON/8a71d225eff18d88e469e6ea9b39cef4).

```
│imagenet/
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
```


### 2. PoolFormer Models

| Model | #Params | Image resolution | #MACs* | Top-1 Acc | Download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| poolformer_s12 | 12M | 224 | 1.8G | 77.2 | [here](https://github.com/sail-sg/poolformer/releases/download/v1.0/poolformer_s12.pth.tar) |
| poolformer_s24 | 21M | 224 | 3.4G | 80.3 | [here](https://github.com/sail-sg/poolformer/releases/download/v1.0/poolformer_s24.pth.tar) |
| poolformer_s36 | 31M | 224 | 5.0G | 81.4 | [here](https://github.com/sail-sg/poolformer/releases/download/v1.0/poolformer_s36.pth.tar) |
| poolformer_m36 | 56M | 224 | 8.8G | 82.1 | [here](https://github.com/sail-sg/poolformer/releases/download/v1.0/poolformer_m36.pth.tar) |
| poolformer_m48 | 73M | 224 | 11.6G | 82.5 | [here](https://github.com/sail-sg/poolformer/releases/download/v1.0/poolformer_m48.pth.tar) |

All the pretrained models can also be downloaded from [BaiDu Yun](https://pan.baidu.com/s/1HSaJtxgCkUlawurQLq87wQ) (password: esac). *For convenient comparison with future models, we updated the MACs counts measured with the [fvcore](https://github.com/facebookresearch/fvcore) library ([example code](misc/mac_count_with_fvcore.py)); these numbers are also reported in the [updated arXiv version](https://arxiv.org/abs/2111.11418).


#### Web Demo

Integrated into [Hugging Face Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the web demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/poolformer)


#### Usage
We also provide a Colab notebook which runs through the steps of performing inference with PoolFormer: [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1n1UK4ihfiySTWTDuusAhm_6CLm1h4bTj?usp=sharing)


### 3. Validation

To evaluate our PoolFormer models, run:

```bash
MODEL=poolformer_s12 # poolformer_{s12, s24, s36, m36, m48}
python3 validate.py /path/to/imagenet --model $MODEL -b 128 \
  --pretrained # or --checkpoint /path/to/checkpoint
```
### 4. Train
We show how to train PoolFormer on 8 GPUs. The learning rate scales linearly with the total batch size: lr = bs / 1024 * 1e-3. For example, with a total batch size of 1024 the learning rate is 1e-3 (at this batch size, setting the learning rate to 2e-3 sometimes gives better performance).

```bash
MODEL=poolformer_s12 # poolformer_{s12, s24, s36, m36, m48}
DROP_PATH=0.1 # drop path rates [0.1, 0.1, 0.2, 0.3, 0.4] corresponding to models [s12, s24, s36, m36, m48]
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./distributed_train.sh 8 /path/to/imagenet \
  --model $MODEL -b 128 --lr 1e-3 --drop-path $DROP_PATH --apex-amp
```
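The scaling rule is plain arithmetic; a quick sanity check (the helper below is illustrative, not part of the repo):

```python
def scaled_lr(batch_per_gpu: int, num_gpus: int, base_lr: float = 1e-3) -> float:
    """lr = total_batch_size / 1024 * 1e-3, the rule quoted above."""
    return batch_per_gpu * num_gpus / 1024 * base_lr

print(scaled_lr(128, 8))  # 0.001, matching --lr 1e-3 in the 8-GPU command above
print(scaled_lr(64, 4))   # 0.00025 for a smaller hypothetical 4-GPU run
```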
### 5. Visualization
![gradcam](https://oss.gittoolsai.com/images/sail-sg_poolformer_readme_1db80936e669.png)

The code to visualize Grad-CAM activation maps of PoolFormer, DeiT, ResMLP, ResNet, and Swin is [here](misc/cam_image.py).
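For a rough standalone alternative to misc/cam_image.py, the third-party pytorch-grad-cam package (`pip install grad-cam`) can produce similar maps. The API usage below assumes a recent release of that package, and the target-layer choice is an assumption, not taken from this repo:

```python
import torch
import timm
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = timm.create_model('poolformer_s12', pretrained=True)
model.eval()

# Assumption: use the last GroupNorm-style layer as the CAM target;
# timm's PoolFormer port uses GroupNorm-based norms, but verify for your version.
norm_layers = [m for m in model.modules() if isinstance(m, torch.nn.GroupNorm)]
target_layers = [norm_layers[-1]]

cam = GradCAM(model=model, target_layers=target_layers)
input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(281)])  # 281 = tabby cat
print(grayscale_cam.shape)  # (1, 224, 224) heatmap to overlay on the image
```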
## Acknowledgment
Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful works.

[pytorch-image-models](https://github.com/rwightman/pytorch-image-models), [mmdetection](https://github.com/open-mmlab/mmdetection), [mmsegmentation](https://github.com/open-mmlab/mmsegmentation).

Besides, Weihao Yu would like to thank the TPU Research Cloud (TRC) program for supporting part of the computational resources.

## Quick Start

### Environment
- **OS**: Linux / macOS / Windows (Linux recommended)
- **Python**: >= 3.7
- **PyTorch**: >= 1.7.0
- **torchvision**: >= 0.8.0
- **Other dependencies**: pyyaml, timm; optional apex-amp for fp16 acceleration
### Installation
```bash
# 1. Clone the repository
git clone https://github.com/sail-sg/poolformer.git
cd poolformer

# 2. Install Python dependencies
pip install torch torchvision pyyaml timm
# Optional: apex for fp16 acceleration
pip install git+https://github.com/NVIDIA/apex.git

# 3. Download the ImageNet dataset (example script;
#    users in mainland China can speed this up with the Tsinghua mirror)
wget https://gist.githubusercontent.com/BIGBALLON/8a71d225eff18d88e469e6ea9b39cef4/raw/extract_ILSVRC.sh
bash extract_ILSVRC.sh /your/path/to/imagenet
```

### Basic Usage
#### 1. Direct inference (single image)
```python
import torch
from PIL import Image
from timm.models import create_model
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

model = create_model('poolformer_s12', pretrained=True)
model.eval()

# Preprocessing follows the standard timm recipe
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

img = Image.open('cat.jpg').convert('RGB')  # hypothetical input image
with torch.no_grad():
    logits = model(transform(img).unsqueeze(0))
print(logits.argmax(-1).item())
```

#### 2. Validate pretrained accuracy
```bash
MODEL=poolformer_s12
python3 validate.py /path/to/imagenet \
  --model $MODEL -b 128 \
  --pretrained
```

#### 3. Hugging Face demo in the browser
Open: https://huggingface.co/spaces/akhaliq/poolformer

#### 4. Zero-install Colab
Click to run: [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1n1UK4ihfiySTWTDuusAhm_6CLm1h4bTj?usp=sharing)

## Use Case

A smart-vending startup needs to recognize 200 fresh-food items in real time on a Jetson Xavier NX with 8 GB of memory, while sustaining 30 FPS so checkout stays fast at peak hours.

### Before poolformer
- With a ResNet50 backbone (98 MB model, 22 FPS), peak-hour queues ran 6 to 8 seconds and customers complained that scanning was slower than a human cashier.
- Switching to DeiT-Small gained 1.8% accuracy but pushed memory use to 6.7 GB; the board hit 75 °C, throttled, and frame rate fell to 15 FPS.
- MobileNetV3 was light enough, but top-1 dropped to 82%; milk and yogurt cartons were regularly confused, driving the refund rate to 4%.
- Two weeks of knowledge distillation plus TensorRT tuning tripled the codebase and still could not hit 30 FPS at 85% accuracy.

### After poolformer
- Loading poolformer_s12_1k.pth directly: a 21 MB model using 2.1 GB of memory, the Jetson steady at 58 °C, 34 FPS, and queues down to 2 seconds.
- The ImageNet pretrained weights work out of the box; after 30 epochs of fine-tuning, in-cabinet top-1 reached 87.3%, milk/yogurt confusion fell to 0.7%, and refunds dropped below 1%.
- With pooling as the only token mixer, the model has 4x fewer parameters than DeiT-Small and needs no distillation or pruning; the code shrank back to about 300 lines, cutting maintenance cost sharply.
- A later upgrade only meant swapping in the larger poolformer_m36 with the same training scripts; the new model shipped within three days.

PoolFormer uses the simplest pooling operation to validate the potential of the MetaFormer architecture, giving edge devices big-model accuracy at small-model speed.

## FAQ

### Is GroupNorm with group=1 equivalent to LayerNorm?
Not exactly. `nn.GroupNorm(num_groups=1, num_channels=C)` computes mean and variance over different dimensions than the LayerNorm commonly used in ViT, which normalizes each token over the channel dimension; the difference from `nn.LayerNorm(normalized_shape=(C, H, W))` is mainly the shape of the learnable parameters. The authors did not compare the latter two experimentally, so no definitive conclusion is given. ([source](https://github.com/sail-sg/poolformer/issues/9))
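A quick numerical check of that distinction (shapes are illustrative):

```python
import torch
import torch.nn as nn

B, C, H, W = 2, 8, 4, 4
x = torch.randn(B, C, H, W)

# GroupNorm with one group normalizes each sample over all of (C, H, W).
gn = nn.GroupNorm(num_groups=1, num_channels=C)

# ViT-style LayerNorm normalizes each spatial position over channels only.
ln = nn.LayerNorm(C)
out_ln = ln(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

# Different reduction dimensions, so the outputs disagree in general.
print(torch.allclose(gn(x), out_ln, atol=1e-5))  # False
```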
### When will the PoolFormer detection configs be released?
At the time, the authors lacked 3090/V100 GPUs to finish the detection experiments, and welcomed the community to benchmark PoolFormer-RetinaNet inference speed on a 3090 or V100 and report the results. ([source](https://github.com/sail-sg/poolformer/issues/3))

### Are there PoolFormer pretrained weights using BN or LN?
An S12 checkpoint trained with LayerNorm (`norm_layer=LayerNormChannel`) is available as poolformer_ln_s12.pth.tar: https://drive.google.com/file/d/1XWeScoZh8eOoWCg8qA1CvNj6SQyys7ou/view?usp=share_link ([source](https://github.com/sail-sg/poolformer/issues/46))

### How do I measure MACs (multiply-accumulates) correctly?
The "FLOPs" in the PoolFormer paper, as in most CV papers, actually refer to MACs. Measure them with the fvcore-based script `misc/mac_count_with_fvcore.py`, for example:
```python
import torch
import timm
from fvcore.nn import FlopCountAnalysis

model = timm.models.resnet50()
print(FlopCountAnalysis(model, torch.randn(1, 3, 224, 224)).total())  # ~4.1e9 MACs
```
Note that ResNet-50's 8.2G FLOPs correspond to 4.1G MACs. ([source](https://github.com/sail-sg/poolformer/issues/37))

### Why does PoolFormerBlock use layer_scale?
layer_scale multiplies the token_mixer output by a learnable scalar. It is borrowed from DeiT-style training hyper-parameters and stabilizes the training of deep networks; PoolFormer does not use DeiT's distillation strategy. ([source](https://github.com/sail-sg/poolformer/issues/35))

### What is the difference between Random Mixing and spatialfc?
spatialfc can be viewed as a learnable version of Random Mixing, which is why it performs better. Random Mixing needs a Softmax to normalize its random matrix, whereas spatialfc's parameters are learned and need no Softmax. The authors did not try spatialfc plus Softmax, but expect similar performance. ([source](https://github.com/sail-sg/poolformer/issues/52))

### How is PoolFormer integrated into Hugging Face Transformers?
A community contributor completed the port and it has been merged into the transformers main library; see https://huggingface.co/docs/transformers/master/en/model_doc/poolformer . The authors planned to add model cards afterwards. ([source](https://github.com/sail-sg/poolformer/issues/26))
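Following up on that last answer, a minimal sketch of loading the community port via transformers. The class and checkpoint names below match the documented port but may differ across transformers versions (older releases used PoolFormerFeatureExtractor for preprocessing):

```python
import torch
from transformers import PoolFormerForImageClassification

model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s12")
model.eval()

# Stand-in input; in practice preprocess a PIL image with the matching
# image processor from the same checkpoint.
pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits
print(logits.argmax(-1).item())  # predicted ImageNet-1K class id
```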