[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-SHI-Labs--Neighborhood-Attention-Transformer":3,"similar-SHI-Labs--Neighborhood-Attention-Transformer":56},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":9,"readme_en":10,"readme_zh":11,"quickstart_zh":12,"use_case_zh":13,"hero_image_url":14,"owner_login":15,"owner_name":16,"owner_avatar_url":17,"owner_bio":18,"owner_company":19,"owner_location":19,"owner_email":19,"owner_twitter":20,"owner_website":21,"owner_url":22,"languages":23,"stars":32,"forks":33,"last_commit_at":34,"license":35,"difficulty_score":36,"env_os":37,"env_gpu":38,"env_ram":37,"env_deps":39,"category_tags":45,"github_topics":47,"view_count":50,"oss_zip_url":19,"oss_zip_packed_at":19,"status":51,"created_at":52,"updated_at":53,"faqs":54,"releases":55},4117,"SHI-Labs\u002FNeighborhood-Attention-Transformer","Neighborhood-Attention-Transformer","Neighborhood Attention Transformer, arxiv 2022 \u002F CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022","Neighborhood-Attention-Transformer 是一款基于滑动窗口注意力机制的高效层级视觉 Transformer 架构。它主要解决了传统全局注意力机制在处理高分辨率图像时计算量过大、显存占用过高的问题，同时克服了普通局部注意力感受野受限的缺陷。\n\n该工具的核心亮点在于引入了“邻域注意力”（Neighborhood Attention）及其进阶版“膨胀邻域注意力”（Dilated Neighborhood Attention）。前者通过限制注意力范围在局部邻域内，大幅降低了计算复杂度；后者则借鉴卷积神经网络中膨胀卷积的思想，在不增加参数量的前提下有效扩大了模型的感受野，使其能够捕捉更广阔的上下文信息。配合专为 PyTorch 开发的高性能 CUDA 扩展库 NATTEN，该架构在图像分类、语义分割、实例分割及图像生成等多个计算机视觉任务中均取得了领先的性能表现。\n\nNeighborhood-Attention-Transformer 非常适合从事计算机视觉研究的科研人员、需要部署高效模型的算法工程师以及希望探索前沿 Transformer 架构的开发者使用。如果你","Neighborhood-Attention-Transformer 是一款基于滑动窗口注意力机制的高效层级视觉 Transformer 架构。它主要解决了传统全局注意力机制在处理高分辨率图像时计算量过大、显存占用过高的问题，同时克服了普通局部注意力感受野受限的缺陷。\n\n该工具的核心亮点在于引入了“邻域注意力”（Neighborhood Attention）及其进阶版“膨胀邻域注意力”（Dilated Neighborhood Attention）。前者通过限制注意力范围在局部邻域内，大幅降低了计算复杂度；后者则借鉴卷积神经网络中膨胀卷积的思想，在不增加参数量的前提下有效扩大了模型的感受野，使其能够捕捉更广阔的上下文信息。配合专为 PyTorch 开发的高性能 CUDA 扩展库 NATTEN，该架构在图像分类、语义分割、实例分割及图像生成等多个计算机视觉任务中均取得了领先的性能表现。\n\nNeighborhood-Attention-Transformer 非常适合从事计算机视觉研究的科研人员、需要部署高效模型的算法工程师以及希望探索前沿 Transformer 架构的开发者使用。如果你正在寻找一种既能保持高精度又能兼顾推理速度的视觉骨干网络，它将是一个极具价值的选择。","# Neighborhood Attention Transformers\n\n\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FHassani_Neighborhood_Attention_Transformer_CVPR_2023_paper.html\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCVPR2023-Neighborhood%20Attention%20Transformer-%2300B0F0\" \u002F>\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15001\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Dilated%20Neighborhood%20Attention%20Trasnformer-%23C209C1\" \u002F>\u003C\u002Fa>\n\n[\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCUDA%20Extension-NATTEN-%23fc6562\" 
\u002F>](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN)\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-coco-minival?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fsemantic-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fsemantic-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco-minival?p=dilated-neighborhood-attention-transformer)\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-generation-on-ffhq-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-256-x-256?p=stylenat-giving-each-head-a-new-perspective)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-generation-on-ffhq-1024-x-1024)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-1024-x-1024?p=stylenat-giving-each-head-a-new-perspective)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u00
2Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-generation-on-lsun-churches-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-churches-256-x-256?p=stylenat-giving-each-head-a-new-perspective)\n\n![NAT-Intro](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_f22bbee3f8b8.png)\n![NAT-Intro](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_57f2c04a4f37.png)\n\n**Powerful hierarchical vision transformers based on sliding window attention.**\n\nNeighborhood Attention (NA, local attention) was introduced in our original paper, \n[NAT](NAT.md), and runs efficiently with our extension to PyTorch, [NATTEN](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN).\n\nWe recently introduced a new model, [DiNAT](DiNAT.md), \nwhich extends NA by dilating neighborhoods (DiNA, sparse global attention, a.k.a. dilated local attention).\n\nCombinations of NA\u002FDiNA are capable of preserving locality, maintaining\ntranslational equivariance,\nexpanding the receptive field exponentially, \nand capturing longer-range inter-dependencies, \nleading to significant performance boosts in downstream vision tasks, such as\n[StyleNAT](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FStyleNAT) for image generation.\n\n\n# News\n\n### March 25, 2023\n* Neighborhood Attention Transformer was accepted to CVPR 2023!\n\n### November 18, 2022\n* NAT and DiNAT are now available through HuggingFace's [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers).\n  * NAT and DiNAT classification models are also available on the HuggingFace's Model Hub: [NAT](https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=nat) | [DiNAT](https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=dinat)\n\n### November 11, 2022\n* New preprint: [StyleNAT: Giving Each Head a New Perspective](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FStyleNAT).\n  * Style-based GAN powered with Neighborhood Attention sets new SOTA on FFHQ-256 with a 2.05 FID.\n  ![stylenat](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_a574a8ac9dda.png)\n\n### October 8, 2022\n* [NATTEN](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN) is now [available as a pip package](https:\u002F\u002Fwww.shi-labs.com\u002Fnatten\u002F)!\n    * You can now install NATTEN with pre-compiled wheels, and start using it in seconds. 
\n    * NATTEN will be maintained and developed as a [separate project](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN) to support broader usage of sliding window attention, even beyond computer vision.\n\n### September 29, 2022\n* New preprint: [Dilated Neighborhood Attention Transformer](DiNAT.md).\n\n\n# Dilated Neighborhood Attention :fire:\n![DiNAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_d293fbb20ee9.png)\n![DiNAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_f860269aa2de.png)\n\nA new hierarchical vision transformer based on Neighborhood Attention (local attention) and Dilated Neighborhood Attention (sparse global attention) that enjoys significant performance boost in downstream tasks.\n\nCheck out the [DiNAT README](DiNAT.md).\n\n\n# Neighborhood Attention Transformer\n![NAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_b5b0c07366d5.png)\n![NAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_5a0ad63d2ba4.png)\n\nOur original paper, [Neighborhood Attention Transformer (NAT)](NAT.md), the first efficient sliding-window local attention.\n\n# How Neighborhood Attention works\nNeighborhood Attention localizes the query token's (red) receptive field to its nearest neighboring tokens in the key-value pair (green). \nThis is equivalent to dot-product self attention when the neighborhood size is identical to the image dimensions. \nNote that the edges are special (edge) cases.\n\n![720p_fast_dm](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_dec242304ecc.png)\n![720p_fast_lm](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_7df2c072692f.png)\n\n\n\n# Citation\n```bibtex\n@inproceedings{hassani2023neighborhood,\n\ttitle        = {Neighborhood Attention Transformer},\n\tauthor       = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},\n\tbooktitle    = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n\tmonth        = {June},\n\tyear         = {2023},\n\tpages        = {6185-6194}\n}\n@article{hassani2022dilated,\n\ttitle        = {Dilated Neighborhood Attention Transformer},\n\tauthor       = {Ali Hassani and Humphrey Shi},\n\tyear         = 2022,\n\turl          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15001},\n\teprint       = {2209.15001},\n\tarchiveprefix = {arXiv},\n\tprimaryclass = {cs.CV}\n}\n@article{walton2022stylenat,\n\ttitle        = {StyleNAT: Giving Each Head a New Perspective},\n\tauthor       = {Steven Walton and Ali Hassani and Xingqian Xu and Zhangyang Wang and Humphrey Shi},\n\tyear         = 2022,\n\turl          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.05770},\n\teprint       = {2211.05770},\n\tarchiveprefix = {arXiv},\n\tprimaryclass = {cs.CV}\n}\n```\n","# 邻域注意力Transformer\n\n\u003Ca href=\"https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fhtml\u002FHassani_Neighborhood_Attention_Transformer_CVPR_2023_paper.html\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCVPR2023-Neighborhood%20Attention%20Transformer-%2300B0F0\" \u002F>\u003C\u002Fa>\n\n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15001\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Dilated%20Neighborhood%20Attention%20Trasnformer-%23C209C1\" \u002F>\u003C\u002Fa>\n\n[\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCUDA%20Extension-NATTEN-%23fc6562\" \u002F>](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN)\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-coco-minival?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fsemantic-segmentation-on-ade20k-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-ade20k-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fsemantic-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fsemantic-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Fpanoptic-segmentation-on-cityscapes-val)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fpanoptic-segmentation-on-cityscapes-val?p=dilated-neighborhood-attention-transformer)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdilated-neighborhood-attention-transformer\u002Finstance-segmentation-on-coco-minival)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Finstance-segmentation-on-coco-minival?p=dilated-neighborhood-attention-transformer)\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-generation-on-ffhq-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-256-x-256?p=stylenat-giving-each-head-a-new-perspective)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-
generation-on-ffhq-1024-x-1024)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-1024-x-1024?p=stylenat-giving-each-head-a-new-perspective)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fstylenat-giving-each-head-a-new-perspective\u002Fimage-generation-on-lsun-churches-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-churches-256-x-256?p=stylenat-giving-each-head-a-new-perspective)\n\n![NAT-Intro](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_f22bbee3f8b8.png)\n![NAT-Intro](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_57f2c04a4f37.png)\n\n**基于滑动窗口注意力的强大分层视觉Transformer模型。**\n\n邻域注意力（NA，局部注意力）在我们的原始论文中被提出，即[NAT](NAT.md)，并可通过我们对PyTorch的扩展[NATTEN](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN)高效运行。\n\n我们最近又提出了一个新的模型，[DiNAT](DiNAT.md)，它通过扩张邻域来扩展NA（DiNA，稀疏全局注意力，又称扩张的局部注意力）。\n\nNA\u002FDiNA的组合能够保持局部性、维持平移等变性、以指数级方式扩大感受野，并捕捉更长距离的相互依赖关系，从而在下游视觉任务中带来显著的性能提升，例如用于图像生成的[StyleNAT](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FStyleNAT)。\n\n\n# 新闻\n\n### 2023年3月25日\n* 邻域注意力Transformer已被CVPR 2023接收！\n\n### 2022年11月18日\n* NAT和DiNAT现已通过HuggingFace的[transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)库提供。\n  * NAT和DiNAT的分类模型也在HuggingFace模型库中上线：[NAT](https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=nat) | [DiNAT](https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=dinat)\n\n### 2022年11月11日\n* 新预印本：[StyleNAT：为每个头赋予全新视角](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FStyleNAT)。\n  * 基于邻域注意力的风格化GAN在FFHQ-256数据集上以2.05的FID值创下新的SOTA记录。\n  ![stylenat](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_a574a8ac9dda.png)\n\n### 2022年10月8日\n* [NATTEN](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN)现已作为pip包发布！\n    * 现在你可以通过预编译的wheel文件安装NATTEN，并在几秒钟内开始使用。\n    * NATTEN将作为一个独立项目维护和发展，以支持更广泛的滑动窗口注意力应用，甚至超越计算机视觉领域。\n\n### 2022年9月29日\n* 新预印本：[扩张邻域注意力Transformer](DiNAT.md)。\n\n\n# 扩张邻域注意力 :fire:\n![DiNAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_d293fbb20ee9.png)\n![DiNAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_f860269aa2de.png)\n\n一种基于邻域注意力（局部注意力）和扩张邻域注意力（稀疏全局注意力）的新分层视觉Transformer模型，在下游任务中表现出显著的性能提升。\n\n请查看[DiNAT README](DiNAT.md)。\n\n\n# 邻域注意力Transformer\n![NAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_b5b0c07366d5.png)\n![NAT-Abs](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_5a0ad63d2ba4.png)\n\n我们的原始论文，[邻域注意力Transformer (NAT)](NAT.md)，是首个高效的滑动窗口局部注意力机制。\n\n# 邻域注意力的工作原理\n邻域注意力会将查询token（红色）的感受野限定在其键值对（绿色）中的最近邻近token范围内。当邻域大小与图像尺寸相同时，这等同于点积自注意力。请注意，边缘区域属于特殊情况。\n\n![720p_fast_dm](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_dec242304ecc.png)\n![720p_fast_lm](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_readme_7df2c072692f.png)\n\n# 引用\n```bibtex\n@inproceedings{hassani2023neighborhood,\n\ttitle        = {Neighborhood Attention Transformer},\n\tauthor       = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},\n\tbooktitle    = {Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n\tmonth        = {June},\n\tyear         = {2023},\n\tpages        = 
{6185-6194}\n}\n@article{hassani2022dilated,\n\ttitle        = {Dilated Neighborhood Attention Transformer},\n\tauthor       = {Ali Hassani and Humphrey Shi},\n\tyear         = 2022,\n\turl          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15001},\n\teprint       = {2209.15001},\n\tarchiveprefix = {arXiv},\n\tprimaryclass = {cs.CV}\n}\n@article{walton2022stylenat,\n\ttitle        = {StyleNAT: Giving Each Head a New Perspective},\n\tauthor       = {Steven Walton and Ali Hassani and Xingqian Xu and Zhangyang Wang and Humphrey Shi},\n\tyear         = 2022,\n\turl          = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.05770},\n\teprint       = {2211.05770},\n\tarchiveprefix = {arXiv},\n\tprimaryclass = {cs.CV}\n}\n```","# Neighborhood Attention Transformer 快速上手指南\n\nNeighborhood Attention Transformer (NAT) 是一种基于滑动窗口注意力机制的高效分层视觉 Transformer。它通过局部注意力（Neighborhood Attention）和扩张局部注意力（Dilated Neighborhood Attention）在保持平移等变性的同时，有效扩大感受野，显著提升图像分类、分割及生成等下游任务的性能。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。Windows 支持取决于 NATTEN 的编译环境。\n*   **Python**: 3.8 或更高版本。\n*   **PyTorch**: 1.9 或更高版本（建议安装与您的 CUDA 版本匹配的最新版）。\n*   **CUDA**: 如果使用 GPU 加速，需安装对应的 CUDA Toolkit 和 cuDNN。\n*   **编译器**: 需要安装 `g++` 和 `nvcc` (若使用 GPU) 以编译 CUDA 扩展。\n\n**前置依赖检查：**\n确保已安装基础深度学习库：\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n# 请将 cu118 替换为您实际的 CUDA 版本，如 cu121, cpu 等\n```\n\n## 安装步骤\n\nNAT 的核心算子依赖于 **NATTEN** (Neighborhood Attention Extension)。为了获得最佳性能，强烈建议通过预编译的 wheel 包进行安装。\n\n### 1. 安装 NATTEN\n您可以直接从 PyPI 安装预编译版本（最快方式）：\n\n```bash\npip install natten\n```\n\n*注：如果上述命令失败或需要特定版本，请访问 [NATTEN GitHub](https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNATTEN) 获取针对特定 CUDA 版本的安装指令。国内用户若遇到下载慢的问题，可尝试使用清华源或阿里源：*\n```bash\npip install natten -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. 安装 Transformers 集成 (可选但推荐)\n自 2022 年 11 月起，NAT 和 DiNAT 模型已集成到 Hugging Face `transformers` 库中。如果您希望通过 HF 接口调用模型：\n\n```bash\npip install transformers\n```\n\n*国内加速方案：*\n```bash\npip install transformers -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\nexport HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com\n```\n\n### 3. 
克隆源码 (用于训练或研究底层代码)\n如果您需要运行官方提供的训练脚本或研究具体实现：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSHI-Labs\u002FNeighborhood-Attention-Transformer.git\ncd Neighborhood-Attention-Transformer\npip install -r requirements.txt\n```\n\n## 基本使用\n\n### 方式一：通过 Hugging Face Transformers 加载预训练模型\n这是最简单的推理方式，适用于图像分类任务。\n\n```python\nfrom transformers import AutoImageProcessor, AutoModelForImageClassification\nfrom PIL import Image\nimport torch\n\n# 加载处理器和模型 (以 NAT-Mini 为例)\nmodel_name = \"shi-labs\u002Fnat-mini-in1k-224\" # 其他变体（nat-tiny、nat-small、nat-base 及 dinat 系列）可在 https:\u002F\u002Fhuggingface.co\u002Fmodels?filter=nat 查找\nprocessor = AutoImageProcessor.from_pretrained(model_name)\nmodel = AutoModelForImageClassification.from_pretrained(model_name)\n\n# 准备图像\nimage = Image.open(\"your_image.jpg\").convert(\"RGB\")\n\n# 预处理并推理\ninputs = processor(images=image, return_tensors=\"pt\")\nwith torch.no_grad():\n    outputs = model(**inputs)\n    logits = outputs.logits\n\n# 获取预测结果\npredicted_class_idx = logits.argmax(-1).item()\nprint(f\"Predicted class: {model.config.id2label[predicted_class_idx]}\")\n```\n\n### 方式二：直接使用 NATTEN 构建自定义层\n如果您想在自定义模型架构中使用 Neighborhood Attention 算子：\n\n```python\nimport torch\nfrom natten import NeighborhoodAttention2D\n\n# 定义参数\ndim = 64          # 嵌入维度\nkernel_size = 7   # 邻域大小 (k)\ndilation = 1      # 膨胀率 (1 为普通局部注意力，>1 为扩张注意力)\nnum_heads = 8     # 注意力头数\n\n# 初始化注意力层\nna_layer = NeighborhoodAttention2D(\n    dim=dim,\n    kernel_size=kernel_size,\n    dilation=dilation,\n    num_heads=num_heads\n)\n\n# 模拟输入张量：NATTEN 的 2D 注意力模块期望输入格式为 (Batch, Height, Width, Channels)\n# 具体维度要求请以 natten 官方文档为准\nx = torch.randn(2, 32, 32, dim)\n\n# 前向传播\noutput = na_layer(x)\n\nprint(f\"Input shape: {x.shape}\")\nprint(f\"Output shape: {output.shape}\")\n```\n\n### 方式三：加载 DiNAT (扩张邻域注意力) 模型\nDiNAT 是 NAT 的进阶版，结合了稀疏全局注意力，适合更复杂的场景。\n\n```python\nfrom transformers import AutoModelForImageClassification\n\n# 加载 DiNAT 模型\nmodel_name = \"shi-labs\u002Fdinat-mini-in1k-224\"\nmodel = AutoModelForImageClassification.from_pretrained(model_name)\n\nprint(f\"Loaded model: {model_name}\")\n# 后续推理步骤与上述 NAT 示例相同\n```","某自动驾驶感知团队正在开发城市道路实时语义分割系统，需在车载边缘设备上精准识别车道线、行人及车辆。\n\n### 没有 Neighborhood-Attention-Transformer 时\n- **局部细节丢失严重**：传统全局注意力机制或大卷积核难以兼顾长距离依赖与精细局部特征，导致小目标（如远处行人）漏检率高。\n- **推理延迟过高**：标准 Transformer 的计算复杂度随图像分辨率呈平方级增长，在高分辨率输入下无法满足车载芯片的实时性要求（>100ms\u002F帧）。\n- **显存占用爆炸**：处理 1080P 路况视频时，注意力矩阵占用大量显存，迫使团队降低输入分辨率，牺牲了关键的道路边缘检测精度。\n- **多尺度适配困难**：现有模型难以灵活调整感受野，面对近处大车与远处小车的尺度剧烈变化时，分割边界模糊不清。\n\n### 使用 Neighborhood-Attention-Transformer 后\n- **局部特征精准捕捉**：利用滑动窗口邻域注意力机制，模型能高效聚焦局部上下文，显著提升了车道线连续性和小目标识别准确率。\n- **线性复杂度加速**：借助 NATTEN CUDA 扩展，计算复杂度降为线性关系，在同等硬件上将推理速度提升至 35ms\u002F帧，满足实时控制需求。\n- **高分辨率低显存运行**：无需降低输入画质即可在有限显存中运行，保留了丰富的路面纹理细节，使夜间弱光场景下的分割效果更鲁棒。\n- **动态感受野调节**：通过膨胀邻域注意力（Dilated NA）灵活覆盖不同尺度物体，无论是近处公交车还是远处骑行者，边缘分割均清晰锐利。\n\nNeighborhood-Attention-Transformer 通过高效的局部注意力机制，成功解决了高分辨率视觉任务中精度与速度的不可调和矛盾。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSHI-Labs_Neighborhood-Attention-Transformer_f22bbee3.png","SHI-Labs","SHI Labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSHI-Labs_896df7d2.jpg","Computer Vision, Machine Learning, and AI Systems & Applications",null,"humphrey_shi","https:\u002F\u002Fwww.shi-labs.com","https:\u002F\u002Fgithub.com\u002FSHI-Labs",[24,28],{"name":25,"color":26,"percentage":27},"Python","#3572A5",99.6,{"name":29,"color":30,"percentage":31},"Shell","#89e051",0.4,1176,89,"2026-03-30T12:08:44","MIT",3,"未说明","必需 NVIDIA GPU（需编译 CUDA 扩展 
NATTEN），具体型号、显存大小及 CUDA 版本未在文中明确说明",{"notes":40,"python":37,"dependencies":41},"该工具核心依赖名为 NATTEN 的自定义 CUDA 扩展以实现高效的滑动窗口注意力机制。虽然可以通过 pip 安装预编译的 NATTEN wheel，但仍需要兼容的 NVIDIA GPU 和 CUDA 环境。项目支持通过 HuggingFace transformers 库加载模型。",[42,43,44],"PyTorch","NATTEN (CUDA Extension)","transformers (可选，用于 HuggingFace 集成)",[46],"开发框架",[48,49],"neighborhood-attention","pytorch",2,"ready","2026-03-27T02:49:30.150509","2026-04-06T12:11:18.542248",[],[],[57,67,76,84,93,101],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":36,"last_commit_at":63,"category_tags":64,"status":51},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[46,65,66],"图像","Agent",{"id":68,"name":69,"github_repo":70,"description_zh":71,"stars":72,"difficulty_score":50,"last_commit_at":73,"category_tags":74,"status":51},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,"2026-04-05T23:32:43",[46,66,75],"语言模型",{"id":77,"name":78,"github_repo":79,"description_zh":80,"stars":81,"difficulty_score":50,"last_commit_at":82,"category_tags":83,"status":51},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[46,65,66],{"id":85,"name":86,"github_repo":87,"description_zh":88,"stars":89,"difficulty_score":36,"last_commit_at":90,"category_tags":91,"status":51},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[46,65,66,92],"视频",{"id":94,"name":95,"github_repo":96,"description_zh":97,"stars":98,"difficulty_score":50,"last_commit_at":99,"category_tags":100,"status":51},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 
助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[46,75],{"id":102,"name":103,"github_repo":104,"description_zh":105,"stars":106,"difficulty_score":50,"last_commit_at":107,"category_tags":108,"status":51},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[65,109,92,110,66,111,75,46,112],"数据工具","插件","其他","音频"]
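As a supplement to the quickstart above, the following is a minimal, self-contained PyTorch sketch of the neighborhood attention mechanism described in the "How Neighborhood Attention works" section. It is not NATTEN's fused CUDA kernel: it is a naive single-head reference whose only purpose is to make the mechanism concrete and to show why restricting each query to a k×k window reduces the attention cost from O((HW)²) to O(HW·k²). The function name, the (B, H, W, C) tensor layout, and the clamped-window edge handling are illustrative assumptions.

```python
# Naive single-head sketch of 2D Neighborhood Attention (NOT the NATTEN
# CUDA kernel). Each query attends only to a kernel_size x kernel_size
# window of keys/values; the window is clamped at the borders so edge
# queries still see a full window (the "edge cases" noted in the README).
# Dilated Neighborhood Attention (DiNA) would sample this window with a
# stride (dilation) instead of taking it contiguously.
import torch

def naive_neighborhood_attention(q, k, v, kernel_size=7):
    """q, k, v: (B, H, W, C) tensors; returns a (B, H, W, C) tensor."""
    B, H, W, C = q.shape
    r = kernel_size // 2
    out = torch.empty_like(q)
    for i in range(H):
        i0 = min(max(i - r, 0), H - kernel_size)      # clamp window rows at the edges
        for j in range(W):
            j0 = min(max(j - r, 0), W - kernel_size)  # clamp window cols at the edges
            k_win = k[:, i0:i0 + kernel_size, j0:j0 + kernel_size, :].reshape(B, -1, C)
            v_win = v[:, i0:i0 + kernel_size, j0:j0 + kernel_size, :].reshape(B, -1, C)
            # (B, 1, C) @ (B, C, k*k) -> (B, 1, k*k) attention weights per query
            attn = torch.softmax(
                q[:, i, j, :].unsqueeze(1) @ k_win.transpose(1, 2) / C ** 0.5, dim=-1
            )
            out[:, i, j, :] = (attn @ v_win).squeeze(1)
    return out

# Tiny smoke test on an 8x8 feature map with a 3x3 neighborhood.
x = torch.randn(1, 8, 8, 16)
print(naive_neighborhood_attention(x, x, x, kernel_size=3).shape)  # torch.Size([1, 8, 8, 16])
```

When the neighborhood covers the whole feature map, this reduces to ordinary dot-product self-attention, as the README states. In practice the `NeighborhoodAttention2D` module from the `natten` package (shown in the quickstart) implements the same idea with fused kernels, multiple heads, and relative positional biases; the loop above is only a readability aid.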