[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-thu-ml--SageAttention":3,"tool-thu-ml--SageAttention":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 
more than 50 languages, including Simplified Chinese, greatly lowering the barrier for learners of different backgrounds worldwide. The project is developed in an open, collaborative model with an active community and continuously updated content, so learners get current, accurate material. If you are looking for a clear, friendly, professional path into machine learning, ML-For-Beginners is an ideal starting point.",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"Data Tools","Video","Plugin","Other","Audio",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities: it not only extracts knowledge efficiently from all kinds of documents, but also lets models reason and execute tasks on top of that knowledge.\n\nHallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structures (tables, charts, and mixed layouts), RAGFlow significantly improves retrieval accuracy, effectively curbing fabricated answers and keeping responses both well grounded and current. Its built-in agent mechanism goes a step further: the system can not only answer questions but also autonomously plan the steps needed to solve complex problems.\n\nThe tool suits developers, enterprise engineering teams, and AI researchers. Whether you want to stand up a private knowledge-base Q&A system quickly or are exploring LLM adoption in vertical domains, you can benefit. RAGFlow offers a visual workflow editor and flexible APIs, lowering the barrier for users without an algorithms background while meeting professional developers' needs for deep customization. Released under the Apache 2.0 license, it is becoming an important bridge between general-purpose LLMs and domain-specific knowledge.",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":103,"forks":104,"last_commit_at":105,"license":106,"difficulty_score":10,"env_os":107,"env_gpu":108,"env_ram":109,"env_deps":110,"category_tags":117,"github_topics":118,"view_count":131,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":160},191,"thu-ml\u002FSageAttention","SageAttention","[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x compared to FlashAttention, without losing end-to-end metrics across language, image, and video models.","SageAttention is an efficient, high-accuracy attention acceleration suite designed for large-model inference. Using novel quantization techniques (INT8, FP8, and even FP4), it compresses and optimizes the key parts of the attention computation, achieving 2-5x speedups over FlashAttention on mainstream GPUs (Ampere, Ada, and Hopper architectures) with almost no loss in the end-to-end quality of language, image, or video models. The SageAttention family (SageAttention, SageAttention2, the enhanced SageAttention2++, and SageAttention3 with its exploration of training) uses a plug-and-play design that integrates into existing projects without retraining. Its core techniques include outlier smoothing, fine-grained quantization strategies, and a two-level accumulation scheme, balancing speed and accuracy. It targets AI developers and researchers, especially those who need to deploy large models efficiently on limited hardware.","# SageAttention\n\u003C!-- We are continuously updating more features. 
You could **Star** and **Watch** our repository to stay updated.\n\n--- -->\nThis repository provides the official implementation of SageAttention, SageAttention2, and SageAttention2++, which achieve substantial speedups on most GPUs without losing accuracy across all models, in a plug-and-play way.\n\n**SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration**  \nJintao Zhang, Jia Wei, Haofeng Huang, Pengle Zhang, Jun Zhu, Jianfei Chen  \nPaper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02367\n\n**SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization**  \nJintao Zhang, Haofeng Huang, Pengle Zhang, Jia Wei, Jun Zhu, Jianfei Chen  \nPaper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10958\n\n**SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training**  \nJintao Zhang, Jia Wei, Haoxu Wang, Pengle Zhang, Xiaoming Xu, Haofeng Huang, Kai Jiang, Jianfei Chen, Jun Zhu  \nPaper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11594\n\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_0c8bd6768821.png)\n*Note: [SageAttention2++](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.21136) achieves higher speed while maintaining the same accuracy.*\n\n## Current Features\n\u003C!-- This is a beta release of SageAttention2. We welcome any feedback on accuracy, performance issues, bugs, feature requests, or suggestions. Please feel free to open an issue or launch a pull request! -->\n\n+ Optimized kernels for **Ampere, Ada and Hopper GPUs.**\n+ INT8 quantization and smoothing for $QK^\\top$ with support for varying granularities (a numerical sketch of the smoothing idea follows this list).\n+ FP8 quantization for $PV$, and FP16 accumulator for FP8\u002FFP16 $PV$.\n+ Two-level accumulation strategy for $PV$ to improve accuracy in FP8 MMA and WGMMA.\n+ Support for `torch.compile` (non-cudagraphs mode) and distributed inference.\n
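\nWhy smoothing helps: subtracting the per-token mean of $K$ removes a large common offset before quantization, and that offset only shifts each row of $QK^\top$ by a per-row constant, which softmax ignores. The following is a minimal, self-contained sketch of that idea (illustrative only, with assumed per-tensor granularity; the shipped kernels quantize per block\u002Fper thread in CUDA and Triton):\n\n```python\nimport torch\n\n# Symmetric per-tensor INT8 quantization. Values land on the INT8 grid but are\n# kept in float so the matmul below runs on any device.\ndef int8_quant(x):\n    scale = x.abs().amax().div(127.0)\n    return torch.round(x.div(scale)).clamp(-127, 127), scale\n\ndef scores_int8(q, k):\n    qi, sq = int8_quant(q)\n    ki, sk = int8_quant(k)\n    return (qi @ ki.T) * (sq * sk)  # dequantized QK^T\n\ntorch.manual_seed(0)\nq = torch.randn(1024, 128)\nk = torch.randn(1024, 128) + 4.0  # large common offset in K, mimicking outliers\n\ns = 128 ** -0.5\nref = torch.softmax((q @ k.T) * s, dim=-1)\nnaive = torch.softmax(scores_int8(q, k) * s, dim=-1)\n# Smoothing: K' = K - mean(K) over the sequence axis; exact w.r.t. softmax,\n# but it shrinks the quantization scale, so the INT8 error drops.\nsmooth = torch.softmax(scores_int8(q, k - k.mean(dim=0, keepdim=True)) * s, dim=-1)\n\nprint('naive INT8 max err:', (naive - ref).abs().max().item())\nprint('smoothed   max err:', (smooth - ref).abs().max().item())\n```\n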
\n## Project Updates\n- [2025-09-27]: 🎉 [SageAttention3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11594) is accepted by NeurIPS 2025 as a **Spotlight** paper!\n- [2025-09-27]: The code of [SageAttention3](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11594) is released in this repository at [sageattention3_blackwell](.\u002Fsageattention3_blackwell\u002F). We would greatly appreciate it if you could take a moment to fill out the form on [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjt-zhang\u002FSageAttention3). Note that since SageAttention2 is more accurate, we still recommend SageAttention2 for precision-sensitive applications.\n- [2025-07-01]: The code of [SageAttention2++](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2505.21136) is released in this repository. We would greatly appreciate it if you could take a moment to fill out the form on [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjt-zhang\u002FSageAttention2_plus). Thank you very much!\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_f90cfdcbae0f.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_48dea504ac39.png)\n\n- [2025-06-19]: [Sparse SageAttention1 API](https:\u002F\u002Fgithub.com\u002Fjt-zhang\u002FSparse_SageAttention_API) and [Sparse SageAttention2 API](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSpargeAttn) can compute attention with any block-sparse pattern very fast.\n- [2025-05-02]: 🎉 SageAttention2 and [SpargeAttn](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSpargeAttn) are accepted by ICML 2025!\n- [2025-02-25]: 🔥 We release [SpargeAttn](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSpargeAttn), a sparse attention mechanism built on SageAttention2 that can accelerate any model without training.\n- [2025-02-15]: 🔥 The compilation code is updated to support the RTX 5090! On the RTX 5090, SageAttention reaches 560 TOPS, 2.7x faster than FlashAttention2!\n- [2025-01-28]: 🔥⚡ SageAttention is now available on Hopper GPUs (H100, H800, H20)! It matches the speed of FlashAttention3-FP8 but offers **much better accuracy!**\n\n| **FlashAttention2** | **FlashAttention3** | **FlashAttention3-FP8** | **SageAttention** |\n|----------------------|----------------------|----------------------|----------------------|\n| ![FlashAttention2](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_cd164054765d.gif) | ![FlashAttention3](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_cd164054765d.gif) | ![FlashAttention3-FP8](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_e97f4a2664ee.gif) | ![SageAttention](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_aa9804ea6070.gif) |\n| **25'34''** | **17'32''** | **12'14''** | **12'07''** |\n\n*Results for [CogVideoX1.5-5B](https:\u002F\u002Fhuggingface.co\u002FTHUDM\u002FCogVideoX1.5-5B) on an NVIDIA H20 GPU*\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_8c6fc63910ad.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_cc68282cbf17.png)\n\n- [2025-01-24]: 🎉 SageAttention is accepted by ICLR 2025!\n- [2024-12-20]: 🔥 Update to the [SageAttention2 paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10958).\n- [2024-12-20]: 🔥 SageAttention 2.0.1 Beta is released! This version introduces a new feature, per-thread quantization, which offers finer granularity while maintaining hardware efficiency.\n- [2024-11-21]: 🔥 SageAttention 2.0.0 beta is released! 
SageAttention now has measured speedups on L20, L40, A100, A800, A6000, RTX3090, and RTX4090.\n- [2024-11-12]: Support for `sageattn_varlen` is now available.\n- [2024-11-11]: Support for different sequence lengths between `q` and `k,v`, for `(batch_size, head_num, seq_len, head_dim)` or `(batch_size, seq_len, head_num, head_dim)` input shapes, and for `group-query attention` is now available.\n\n\n## Installation\n### Base environment\n+ `python>=3.9`, `torch>=2.3.0`, `triton>=3.0.0`\n+ `CUDA`:\n  + `>=12.8` for Blackwell or SageAttention2++\n  + `>=12.4` for FP8 support on Ada\n  + `>=12.3` for FP8 support on Hopper\n  + `>=12.0` for Ampere\n+ `flash-attn` for benchmarking\n\n### Install Package\n\nFor SageAttention V1 in Triton (slower than SageAttention V2\u002FV2++\u002FV3), refer to the [SageAttention-1](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Ftree\u002Fsageattention-1) branch and install with pip: `pip install sageattention==1.0.6`\n\nTo use SageAttention 2.2.0 (containing SageAttention2++), install with pip:\n```\npip install sageattention==2.2.0 --no-build-isolation\n```\n\n**Or** compile from source:\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention.git\ncd SageAttention\nexport EXT_PARALLEL=4 NVCC_APPEND_FLAGS=\"--threads 8\" MAX_JOBS=32 # Optional\npython setup.py install\n```\n\nTo benchmark the speed against FlashAttention3, compile FlashAttention3 from source:\n```\ngit clone https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention.git --recursive\ncd flash-attention\ngit checkout b7d29fb3b79f0b78b1c369a52aaa6628dabfb0d7 # 2.7.2 release\ncd hopper\npython setup.py install\n```\n\n## How to Use\n```python\nfrom sageattention import sageattn\nattn_output = sageattn(q, k, v, tensor_layout=\"HND\", is_causal=False)\n```\n+ `q, k, v` are **FP16\u002FBF16** dtype with the shape `(batch_size, head_num, seq_len, head_dim)` using the default `tensor_layout=\"HND\"`. For the shape `(batch_size, seq_len, head_num, head_dim)`, set `tensor_layout=\"NHD\"`.\n+ `is_causal` determines the use of a causal mask.\n\n### Available APIs:\n+ `sageattn`: Automatically selects the optimal kernel for the GPU to achieve a good performance-accuracy trade-off.\n+ `sageattn_qk_int8_pv_fp16_triton`: INT8 quantization for $QK^\\top$ and FP16 for $PV$ using the Triton backend.\n+ `sageattn_qk_int8_pv_fp16_cuda`: INT8 quantization for $QK^\\top$ and FP16 for $PV$ using the CUDA backend.\n+ `sageattn_qk_int8_pv_fp8_cuda`: INT8 quantization for $QK^\\top$ and FP8 for $PV$ using the CUDA backend. (Setting `pv_accum_dtype=fp32+fp16` corresponds to SageAttention2++.)\n+ `sageattn_qk_int8_pv_fp8_cuda_sm90`: INT8 quantization for $QK^\\top$ and FP8 for $PV$ using the CUDA backend, specifically optimized for Hopper GPUs.\n+ `sageattn_varlen`: INT8 quantization for $QK^\\top$ and FP16 for $PV$ using the Triton backend, with support for varying sequence lengths within the same batch.\n\nFor optimal speed and accuracy on custom devices and models, we strongly recommend referring to [this file](.\u002Fsageattention\u002Fcore.py) for detailed guidance.\n\n> **Note:**\nSupport for different sequence lengths between `q` and `k,v`, and for `group-query attention`, is available; a usage sketch follows below.\n
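\nFor example (a hedged sketch: the grouped-query shapes follow the 2024-11-11 update above, and the `pv_accum_dtype` keyword is taken from the API note; treat `sageattention\u002Fcore.py` as the authoritative reference for exact signatures):\n\n```python\nimport torch\nfrom sageattention import sageattn, sageattn_qk_int8_pv_fp8_cuda\n\n# Grouped-query attention: 32 query heads share 8 KV heads.\nB, Hq, Hkv, S, D = 2, 32, 8, 4096, 128\nq = torch.randn(B, Hq, S, D, dtype=torch.float16, device='cuda')\nk = torch.randn(B, Hkv, S, D, dtype=torch.float16, device='cuda')\nv = torch.randn(B, Hkv, S, D, dtype=torch.float16, device='cuda')\n\n# Portable path: sageattn picks the best kernel for the detected GPU.\nout = sageattn(q, k, v, tensor_layout='HND', is_causal=True)\n\n# Explicit path (assumed kwargs; see core.py): FP8 PV with the fp32+fp16\n# two-level accumulator, i.e. the SageAttention2++ configuration.\n# Requires an FP8-capable GPU (Ada\u002FHopper) per the CUDA requirements above.\nout = sageattn_qk_int8_pv_fp8_cuda(q, k, v, tensor_layout='HND', is_causal=True,\n                                   pv_accum_dtype='fp32+fp16')\n```\n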
\n### Plug-and-play Example\n\nWe can easily replace `scaled_dot_product_attention`, taking [CogVideoX](https:\u002F\u002Fhuggingface.co\u002Fzai-org\u002FCogVideoX-2b) as an example.\n\nAdd the following code and run:\n```diff\nimport torch.nn.functional as F\n\n+ from sageattention import sageattn\n+ F.scaled_dot_product_attention = sageattn\n\n```\n\nConcretely:\n\n```bash\ncd example\npython cogvideox_infer.py --model cogvideox-2b --compile --attention_type sage\n```\n\n**You will get a lossless video in** `.\u002Fexample\u002Fvideos\u002F\u003Cmodel>\u002F\u003Cattention_type>\u002F` **faster than with** `--attention_type sdpa`. More examples and guidance can be found under the `example\u002F` directory.\n\n> **Note:** Not all models work with `F.scaled_dot_product_attention = sageattn`. In general, you should replace the original attention by modifying the attention class of the target model. For image and video models, we suggest replacing only the attention in the DiT (see `example\u002Fmodify_mochi.py` for details).\n\n### Kernel Benchmarking\nWe provide a benchmarking script to compare the speed of different kernels, including SageAttention, FlashAttention2, and FlashAttention3. Please refer to the `benchmark\u002F` directory for more details; a minimal timing sketch also appears after the performance figures below.\n\n## Performance\n### Speed of Kernels\n\n`8+8` denotes the kernel with INT8 quantization for $QK^\\top$ and FP8 quantization for $PV$. `8+16` uses FP16 for $PV$ with an FP16 accumulator.\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_f90cfdcbae0f.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_48dea504ac39.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_a544baf0f646.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_d2d2f4158317.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_8c6fc63910ad.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_cc68282cbf17.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_681d995c55cf.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_e1cbd11421fa.png)\n\n> **Note:** The TOPS results refer only to the attention kernel, excluding the quantization and smoothing.\n\n### End-to-end Performance\n#### **End-to-End Accuracy:**\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_eae3a7c560f4.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_178a5743fb2d.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_208767f9975e.png)\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_f0630bbf4909.png)\n\n#### **End-to-End Speedup:**\n\n![Local Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_readme_bc796018f4e4.png)\n*Note: SageAttention2++ achieves higher speed.*\n
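\nAs a quick sanity check on your own GPU (a minimal sketch, not the repository's `benchmark\u002F` scripts; the shapes are arbitrary assumptions), you can time `sageattn` against PyTorch's SDPA:\n\n```python\nimport torch\nimport torch.nn.functional as F\nfrom sageattention import sageattn\n\ndef bench_ms(fn, *args, iters=50):\n    # CUDA events measure device-side time; warm up first to exclude\n    # one-time kernel compilation from the measurement.\n    for _ in range(5):\n        fn(*args)\n    start = torch.cuda.Event(enable_timing=True)\n    end = torch.cuda.Event(enable_timing=True)\n    start.record()\n    for _ in range(iters):\n        fn(*args)\n    end.record()\n    torch.cuda.synchronize()\n    return start.elapsed_time(end) \u002F iters  # ms per call\n\nq, k, v = (torch.randn(2, 32, 8192, 128, dtype=torch.float16, device='cuda')\n           for _ in range(3))\nprint('sdpa ms:', bench_ms(F.scaled_dot_product_attention, q, k, v))\nprint('sage ms:', bench_ms(sageattn, q, k, v))\n```\n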
\n## Citation\n**If you use this code or find our work valuable, please cite:**\n```\n@inproceedings{zhang2025sageattention,\n  title={SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration},\n  author={Zhang, Jintao and Wei, Jia and Zhang, Pengle and Zhu, Jun and Chen, Jianfei},\n  booktitle={International Conference on Learning Representations (ICLR)},\n  year={2025}\n}\n@inproceedings{zhang2024sageattention2,\n  title={SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization},\n  author={Zhang, Jintao and Huang, Haofeng and Zhang, Pengle and Wei, Jia and Zhu, Jun and Chen, Jianfei},\n  booktitle={International Conference on Machine Learning (ICML)},\n  year={2025}\n}\n@article{zhang2025sageattention2++,\n  title={SageAttention2++: A More Efficient Implementation of SageAttention2},\n  author={Zhang, Jintao and Xu, Xiaoming and Wei, Jia and Huang, Haofeng and Zhang, Pengle and Xiang, Chendong and Zhu, Jun and Chen, Jianfei},\n  journal={arXiv preprint arXiv:2505.21136},\n  year={2025}\n}\n@article{zhang2025sageattention3,\n  title={SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training},\n  author={Zhang, Jintao and Wei, Jia and Zhang, Pengle and Xu, Xiaoming and Huang, Haofeng and Wang, Haoxu and Jiang, Kai and Zhu, Jun and Chen, Jianfei},\n  journal={arXiv preprint arXiv:2505.11594},\n  year={2025}\n}\n```\n",
"# SageAttention Quickstart Guide\n\n## Environment\n\n### System and dependency requirements\n- **Python** ≥ 3.9\n- **PyTorch** ≥ 2.3.0\n- **Triton** ≥ 3.0.0\n- **CUDA** (by GPU architecture):\n  - Blackwell, or SageAttention2++: ≥ 12.8\n  - FP8 on Ada (e.g. RTX 4090): ≥ 12.4\n  - FP8 on Hopper (e.g. H100): ≥ 12.3\n  - Ampere (e.g. A100, RTX 3090): ≥ 12.0\n\n> A regional PyPI mirror can speed up installation, e.g.: `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## Installation\n\n### Recommended: pip (includes SageAttention2++)\n```bash\npip install sageattention==2.2.0 --no-build-isolation\n```\n\n### Or compile from source\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention.git\ncd SageAttention\nexport EXT_PARALLEL=4 NVCC_APPEND_FLAGS=\"--threads 8\" MAX_JOBS=32  # optional, speeds up compilation\npython setup.py install\n```\n\n> Note: if you only need the legacy SageAttention V1 (slower), switch to the `sageattention-1` branch and install `sageattention==1.0.6`.\n\n## Basic usage\n\n### Minimal example\n```python\nfrom sageattention import sageattn\n\n# q, k, v are FP16\u002FBF16 tensors of shape (batch_size, head_num, seq_len, head_dim)\nattn_output = sageattn(q, k, v, tensor_layout=\"HND\", is_causal=False)\n```\n\n### Parameters\n- `tensor_layout`:\n  - `\"HND\"`: input shape `(B, H, S, D)`\n  - `\"NHD\"`: input shape `(B, S, H, D)`\n- `is_causal`: whether to apply a causal mask (for generation tasks)\n\n### Plug-and-play replacement (CogVideoX example)\nAt the top of the inference script, add:\n```python\nimport torch.nn.functional as F\nfrom sageattention import sageattn\nF.scaled_dot_product_attention = sageattn\n```\n\nThen run the example script:\n```bash\ncd example\npython cogvideox_infer.py --model cogvideox-2b --compile --attention_type sage\n```\n\n> Note: not all models support replacing `F.scaled_dot_product_attention` directly. For image\u002Fvideo generation models (e.g. DiT architectures), replace the attention module inside the model by hand (see `example\u002Fmodify_mochi.py`). A sketch for verifying that your environment meets the requirements above follows.\n
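\n> Optional environment check (a hedged sketch; the compute-capability-to-architecture mapping is standard CUDA naming, not a SageAttention API):\n\n```python\nimport torch\n\n# Assumes a CUDA build of PyTorch with a visible GPU.\nmajor, minor = torch.cuda.get_device_capability()\ncc = major * 10 + minor\n# sm80\u002Fsm86 Ampere, sm89 Ada, sm90 Hopper, sm100+ Blackwell\nif cc >= 100:\n    arch = 'Blackwell'\nelif cc >= 90:\n    arch = 'Hopper'\nelif cc == 89:\n    arch = 'Ada'\nelif cc >= 80:\n    arch = 'Ampere'\nelse:\n    arch = 'pre-Ampere (unsupported)'\nprint('GPU:', torch.cuda.get_device_name(), '| arch:', arch)\nprint('CUDA (PyTorch build):', torch.version.cuda, '| torch:', torch.__version__)\n```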
","An AI startup is deploying its in-house multimodal video-generation model (based on the CogVideoX architecture) to generate product promo clips for an e-commerce platform in real time, targeting high-throughput, low-latency inference on a single RTX 5090 server.\n\n### Without SageAttention\n- Inference is slow: with FlashAttention2, a 10-second HD clip takes about 8.2 s, short of real-time needs.\n- Switching to FlashAttention3-FP8 cuts this to 3.1 s, but visible color shift and structural distortion appear, and customer complaints rise.\n- Staying at FP16 for quality keeps VRAM usage high, capping per-card concurrency at 4 streams and driving up server costs.\n- Other quantization schemes destabilize the attention numerics, requiring extra fine-tuning and stretching development by 2+ weeks.\n- Migrating to new Hopper cards (e.g. H100) brings little improvement, leaving the new hardware underused.\n\n### With SageAttention\n- The attention module is swapped in directly, with no retraining: inference on the RTX 5090 drops to 2.9 s, nearly a 3x speedup.\n- Generated video quality is almost identical to the FP16 baseline, with no significant drop in PSNR or FID, and customer satisfaction recovers.\n- VRAM usage drops by about 35%, per-card concurrency rises to 7 streams, and the same workload needs nearly half as many servers.\n- Plug-in integration takes only a few lines of code; the service went live the same day, saving substantial engineering time.\n- On H100 the same 2.5x speedup applies, adapting seamlessly to next-generation GPUs for flexible future scaling.\n\nSageAttention delivers plug-and-play gains in inference efficiency and hardware utilization for multimodal large models, without sacrificing generation quality.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthu-ml_SageAttention_0c8bd676.png","thu-ml","TSAIL group","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fthu-ml_852ca511.jpg","Tsinghua Statistical Artificial Intelligence & Learning Group",null,"https:\u002F\u002Fml.cs.tsinghua.edu.cn","https:\u002F\u002Fgithub.com\u002Fthu-ml",[83,87,91,95,99],{"name":84,"color":85,"percentage":86},"Cuda","#3A4E3A",47.7,{"name":88,"color":89,"percentage":90},"Python","#3572A5",28.5,{"name":92,"color":93,"percentage":94},"C++","#f34b7d",22.1,{"name":96,"color":97,"percentage":98},"C","#555555",1.5,{"name":100,"color":101,"percentage":102},"Shell","#89e051",0.2,3273,389,"2026-04-04T11:57:43","Apache-2.0","Linux","An NVIDIA GPU is required; Ampere (e.g. A100, RTX3090), Ada (e.g. RTX4090, L40, L20), and Hopper (e.g. H100, H20) architectures are supported. VRAM is not specified, but at least 8 GB is recommended. CUDA requirements: >=12.8 (Blackwell or SageAttention2++), >=12.4 (FP8 on Ada), >=12.3 (FP8 on Hopper), >=12.0 (Ampere)","Not specified",{"notes":111,"python":112,"dependencies":113},"Linux only; macOS and Windows are not supported. When installing, use the --no-build-isolation flag or compile from source. Some features (e.g. SageAttention2++) require newer CUDA versions and Blackwell-architecture GPUs. For best performance, use FP8-capable Ada or Hopper GPUs.",">=3.9",[114,115,116],"torch>=2.3.0","triton>=3.0.0","flash-attn",[13,52,26],[119,120,121,122,123,124,125,126,127,128,129,130],"attention","inference-acceleration","llm","quantization","cuda","triton","video-generation","efficient-attention","mlsys","llm-infra","vit","video-generate",18,"2026-03-27T02:49:30.150509","2026-04-06T05:38:03.150560",[135,140,145,150,155],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},470,"The installation fails with 'Cannot find CUDA_HOME. CUDA must be available to build the package.' 
How do I fix this?","This error means the system cannot find the CUDA installation path. The simplest fix is to use a prebuilt wheel rather than building from source: download the matching file from https:\u002F\u002Fgithub.com\u002Fwoct0rdho\u002FSageAttention\u002Freleases and install it.","https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Fissues\u002F110",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},471,"Does SageAttention support attention bias?","Official SageAttention does not currently support an attention bias directly. A community implementation of Flash Attention with bias in Triton is available at https:\u002F\u002Fgithub.com\u002Fpengzhangzhi\u002FFlash-Attention-with-Bias-Triton. Note that adding a bias increases memory-bandwidth consumption, occupies registers, and can cause register spills, degrading kernel performance.","https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Fissues\u002F111",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},472,"Accuracy drops sharply after switching to SageAttention for LLM inference. What should I do?","This was confirmed as a bug, and the maintainers state it is fixed in the latest commits (see PR #50). Make sure you are using the latest version of SageAttention and that the fix is applied. If the problem persists, see the discussion in Issue #141.","https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Fissues\u002F55",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},473,"I get an AssertionError at runtime saying the dtype must be float16 or bfloat16. Is float32 unsupported?","Yes. Some versions of SageAttention (e.g. 1.0.6) require input tensors to be torch.float16 or torch.bfloat16; torch.float32 is not supported. Make sure your model and inputs use a compatible precision, or upgrade\u002Fdowngrade to a version that supports the required dtype.","https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Fissues\u002F42",{"id":156,"question_zh":157,"answer_zh":158,"source_url":159},474,"Is SageAttention compatible with PyTorch 2.9.0.dev20250813+cu128?","There is currently a compatibility problem. The error stems from a known bug in that PyTorch dev build (see pytorch\u002Fpytorch#148317), not from SageAttention itself. Avoid that dev version for now, or wait for the upstream fix before installing.","https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention\u002Fissues\u002F242",[161],{"id":162,"version":163,"summary_zh":164,"released_at":165},100119,"v2.0.1","SageAttention v2.0.1","2025-01-28T06:25:52"]