[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-xlite-dev--LeetCUDA":3,"tool-xlite-dev--LeetCUDA":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,2,"2026-04-18T23:33:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
**xlite-dev/LeetCUDA** 📚 LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners 🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA. 🎉

LeetCUDA is a modern set of CUDA learning notes and hands-on kernels built for beginners, designed to help users master core GPU programming skills from a PyTorch environment. It addresses the pain points developers face when learning low-level high-performance computing: a high entry barrier, scattered material, and a lack of high-quality reference implementations.

The resource suits AI researchers who want to understand the low-level workings of deep learning, systems engineers, and developers looking to speed up model inference. Both CUDA newcomers and veterans optimizing existing operators can benefit from it.

Its technical highlights stand out: beyond collecting 200+ practical CUDA kernel examples, it offers deep dives into Tensor Cores, HGEMM (half-precision general matrix multiplication), and FlashAttention-2. Its hand-written HGEMM reaches 98%~100% of cuBLAS peak performance, while its FlashAttention built on pure MMA PTX instructions demonstrates extreme memory-access efficiency. The project also covers practical use of TF32, F16, and BF16 precision formats and ships with 100+ companion technical blog posts. By providing a complete path from theory to deployment, LeetCUDA serves as a bridge between deep learning algorithms and hardware acceleration.

<div align="center">
  <p align="center">
    <h2>📚 LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners 🐑</h2>
    <img src='https://oss.gittoolsai.com/images/xlite-dev_LeetCUDA_readme_cae076e970b2.png' >
  </p>
  <div align='center'>
      <img src=https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg >
      <img src=https://img.shields.io/badge/Language-CUDA-brightgreen.svg >
      <img src=https://img.shields.io/github/forks/xlite-dev/LeetCUDA.svg?style=dark >
      <img src=https://img.shields.io/github/stars/xlite-dev/LeetCUDA.svg?style=dark >
      <img src=https://img.shields.io/badge/License-GPLv3.0-turquoise.svg >
      <a href="https://hellogithub.com/repository/98348655a96640ca8ddcbc298edc901d" target="_blank"><img src="https://api.hellogithub.com/v1/widgets/recommend.svg?rid=98348655a96640ca8ddcbc298edc901d&claim_uid=ofSCbzTmdeQk3FD&theme=small" alt="Featured｜HelloGitHub" /></a>
  </div>
</div>

📚 **LeetCUDA**: It includes **Tensor/CUDA Cores, TF32/F16/BF16/F8**, [📖200+ CUDA Kernels🔥](#cuda-kernel) with PyTorch, [📖100+ LLM/CUDA🔥](#my-blogs-part-1) blogs, [📖HGEMM⚡️](./kernels/hgemm), which achieves `98%~100%` of **cuBLAS** TFLOPS, and [📖flash-attn⚡️](./kernels/flash-attn) using Tensor Cores with pure MMA PTX. ♥️ Please consider leaving a ⭐️ Star to support this project ~ ♥️

<div align="center">
  <p align="center">
    <a href="#contribute">🔥🔥 PR Welcome: Add Your Kernel to LeetCUDA! Let's make it awesome together! 🎉🎉</a> <br>
    <a href=https://github.com/xlite-dev/LeetCUDA/graphs/contributors > <img src=https://opencollective.com/leetcuda/contributors.svg height=40px > </a>
  </p>
</div>

## ©️Citations🎉🎉

```BibTeX
@misc{LeetCUDA@2025,
  title={LeetCUDA: A Modern CUDA Learn Notes with PyTorch for Beginners},
  url={https://github.com/xlite-dev/LeetCUDA.git},
  note={Open-source software available at https://github.com/xlite-dev/LeetCUDA.git},
  author={DefTruth and Many Others},
  year={2025}
}
```

## 📖 News 🔥🔥
<div id="news"></div>

- [2026/03] Cache-DiT **[🎉v1.3.0](https://github.com/vipshop/cache-dit)** is released; the major updates include [Ring](https://cache-dit.readthedocs.io/en/latest/user_guide/CONTEXT_PARALLEL) Attention w/ [batched P2P](https://cache-dit.readthedocs.io/en/latest/user_guide/CONTEXT_PARALLEL), [USP](https://cache-dit.readthedocs.io/en/latest/user_guide/CONTEXT_PARALLEL/) (hybrid Ring and Ulysses), hybrid 2D and 3D parallelism (💥[USP + TP](https://cache-dit.readthedocs.io/en/latest/user_guide/HYBRID_PARALLEL/)), and reduced VAE-P communication overhead.

![arch](https://oss.gittoolsai.com/images/xlite-dev_LeetCUDA_readme_de7e08501500.png)

- [2026/04] **[🤖ffpa-attn](https://github.com/xlite-dev/ffpa-attn.git)** is released! Yet another Faster Flash Prefill Attention with O(1)🎉 SRAM complexity for large headdim, **1.8x~3x↑**🎉 vs SDPA EA: [📈L20 ~1.9x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-l20), [📈A30 ~1.8x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-a30), [📈4090 ~2.1x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-4090).
Currently, FFPA supports self-attention, cross-attention, grouped/multi-query attention, and causal attention with large headdim (D = 320~1024), while standard FlashAttention-2 only supports headdim <= 256.

<div align='center'>
<img height="320px" alt="image" src="https://oss.gittoolsai.com/images/xlite-dev_LeetCUDA_readme_c22d74ae7cfd.png" />
</div>

- [2024/12] **[⚡️HGEMM](https://github.com/xlite-dev/HGEMM.git)** is released! HGEMM written from scratch using Tensor Cores with the **WMMA, MMA and CuTe** APIs, achieving peak🎉 performance.

## 📖 Contents
<div id="contents"></div>

- [📖 HGEMM-MMA 🎉🎉](#HGEMM-bench)
- [📖 FlashAttention-MMA 🎉🎉](#fa-mma-bench)
  - [📚 Split KV (Basic, FA-1)](#mma-split-kv)
  - [📚 Split Q (Faster, FA-2)](#mma-split-q)
  - [📚 Split Q + Shared KV](#mma-share-kv)
  - [📚 Split Q + Shared QKV](#mma-share-qkv)
  - [📚 Split Q + QK Tiling](#mma-tiling-qk)
  - [📚 Split Q + QKV Tiling](#mma-tiling-qkv)
- [📖 200+ CUDA Kernels 🔥🔥](#cuda-kernel)
  - [📚 Easy ⭐️](#cuda-kernel-easy-medium)
  - [📚 Medium ⭐️⭐️](#cuda-kernel-easy-medium)
  - [📚 Hard ⭐️⭐️⭐️](#cuda-kernel-hard)
  - [📚 Hard+ ⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)
  - [📚 Hard++ ⭐️⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)
  - [📚 Triton ⭐️⭐️⭐️](#triton-kernel)
  - [📚 CUTLASS ⭐️⭐️⭐️](#cutlass-kernel)
- [📖 100+ LLM/CUDA Blogs 🔥](#my-blogs-part-1)
- [📖 How to Contribute 👀👇](#contribute)

## 📖 HGEMM Benchmark 🎉🎉

<div id="HGEMM-bench"></div>

Currently, on NVIDIA L20, RTX 4090 and RTX 3080 Laptop, compared with cuBLAS's default Tensor Cores algorithm, the `HGEMM (WMMA/MMA/CuTe)` implemented in this repo (`blue`🔵) achieves `98%~100%` of cuBLAS's (`orange`🟠) performance. Please check the [toy-hgemm library⚡️⚡️](./kernels/hgemm) or the [HGEMM⚡️⚡️](https://github.com/xlite-dev/HGEMM) repo for more details.

![toy-hgemm-library](https://oss.gittoolsai.com/images/xlite-dev_LeetCUDA_readme_d369fb27d66b.png)

|📚Feature |📚Feature |📚Feature |📚Feature|
|:---:|:---:|:---:|:---:|
|✔️CUDA/**Tensor Cores**|✔️Loop over K|✔️Tile Block(BMxBK)|✔️Tile Threads(T 8x8)|
|✔️WMMA(m16n16k16)|✔️MMA(m16n8k16)|✔️Pack LDST(128 bits)|✔️SMEM Padding|
|✔️Copy Async|✔️Tile MMAs|✔️Tile Warps|✔️**Multi Stages(2~4)**|
|✔️Register Double Buffers|✔️**Block Swizzle**|✔️**Warp Swizzle**|✔️**SMEM Swizzle**(CuTe/MMA)|
|✔️Collective Store(Shfl)|✔️Layout NN|✔️Layout TN|✔️SGEMM FP32/TF32|
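As a baseline for the feature table above, here is a minimal single-warp WMMA HGEMM sketch using the `m16n16k16` fragment path and the "Loop over K" pattern. This is hypothetical illustration code (sm_70+, M/N/K assumed multiples of 16), far simpler than the repo's multi-stage, double-buffered kernels:

```C++
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Naive WMMA HGEMM sketch: C[M,N] = A[M,K] * B[K,N], row-major, f16 in / f16 acc.
// One warp computes one 16x16 tile of C; no smem, no multi-stage pipeline.
__global__ void hgemm_wmma_naive(const half* A, const half* B, half* C,
                                 int M, int N, int K) {
  int warp_m = blockIdx.y;  // which 16-row tile of C
  int warp_n = blockIdx.x;  // which 16-col tile of C

  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
  wmma::fragment<wmma::accumulator, 16, 16, 16, half> c_frag;
  wmma::fill_fragment(c_frag, __float2half(0.0f));

  // Loop over K in steps of 16 (the "Loop over K" feature in the table).
  for (int k = 0; k < K; k += 16) {
    wmma::load_matrix_sync(a_frag, A + warp_m * 16 * K + k, K);
    wmma::load_matrix_sync(b_frag, B + k * N + warp_n * 16, N);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
  }
  wmma::store_matrix_sync(C + warp_m * 16 * N + warp_n * 16, c_frag, N,
                          wmma::mem_row_major);
}
// Launch (one warp per block for clarity):
//   dim3 grid(N / 16, M / 16); hgemm_wmma_naive<<<grid, 32>>>(A, B, C, M, N, K);
```

The repo's kernels layer smem tiling, cp.async multi-staging, swizzling and register double buffering on top of this same fragment loop.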
## 📖 FA2-MMA Benchmark 🎉🎉

<div id="fa-mma-bench"></div>

I have also implemented **FlashAttention-2** using pure MMA PTX instructions, which supports features such as Multi-Stages, Tile MMA, Tile Warp, Shared KV SMEM, **Fully Shared QKV SMEM**, **Prefetch Q s2r**, **Prefetch K/V g2s**, **QKV Fine-grained Tiling**, Collective Store, etc. Please refer to [flash-attn⚡️⚡️](./kernels/flash-attn) for more details.

![flash-attn-mma](https://oss.gittoolsai.com/images/xlite-dev_LeetCUDA_readme_25f231f22aea.png)

|📚Feature |📚Feature |📚Feature |📚Feature|
|:---:|:---:|:---:|:---:|
|✔️Tensor Cores|✔️Loop over N/D |✔️Tile Block(Br, Bc)|✔️MMA(m16n8k16)|
|✔️Pack LDST(128 bits)|✔️SMEM **Swizzle**/Padding |✔️Copy Async|✔️Tile MMAs|
|✔️Tile Warps|✔️Multi Stages(1/2)|✔️Collective Store(Shfl)|✔️**Split KV/Q**|
|✔️**Shared QKV** SMEM|✔️**Prefetch Q** s2r|✔️**Prefetch KV** g2s|✔️**QKV Fine-grained Tiling**|

Currently, for small-scale attention `(B <= 4, H <= 48, SeqLen <= 8192, D <= 64)`, it can run faster than FA2/SDPA on some devices. For example, on NVIDIA RTX 3080 Laptop, the [📚 Split Q + Fully Shared QKV SMEM](#mma-share-qkv) method achieves **55 TFLOPS (D=64)**, almost **~1.5x** 🎉 faster than FA2. On NVIDIA L20, the 🤖[ffpa-attn](https://github.com/xlite-dev/ffpa-attn) method achieves **104 TFLOPS (D=512)**, almost **~1.8x** 🎉 faster than SDPA (EFFICIENT ATTENTION). For large-scale attention, however, a performance gap remains. Stay tuned for updates ~ (MMA Acc F16/F32 with softmax Acc F32, vs FA2's MMA/softmax Acc F32; benchmark 👇)

|Algorithm| (B,H,N,D) | RTX 3080 Laptop | L20 | RTX 4090 |
|:---:|:---:|:---:|:---:|:---:|
|FlashAttention-2|(1,8,8192,64)|37 TFLOPS|100 TFLOPS|145 TFLOPS|
|share-qkv+stage2|(1,8,8192,64)|**55 TFLOPS**|99 TFLOPS|**221 TFLOPS**|
|FlashAttention-2|(1,48,8192,64)|37 TFLOPS|109 TFLOPS|163 TFLOPS|
|share-qkv+stage2|(1,48,8192,64)|**48 TFLOPS**|107 TFLOPS|**224 TFLOPS**|
|SDPA(EFFICIENT ATTENTION)|(1,48,8192,512)|16 TFLOPS|58 TFLOPS|85 TFLOPS|
|🤖[ffpa-attn](https://github.com/xlite-dev/ffpa-attn)|(1,48,8192,512)|**39 TFLOPS**|**104 TFLOPS**|**200 TFLOPS**|
|Precision Errors vs FA2/SDPA| / | max: < ~1e-3 | min: ~0.0 | mean: < ~1e-5 |

The `Split KV` and `Split Q` implementations are both provided in [flash-attn⚡️⚡️](./kernels/flash-attn) for performance comparison. The `Split KV` method, which splits all of Q, K and V across MMA tiles (warps), is slower than the `Split Q` method, which splits only Q across warps while every warp accesses the full K/V.

- 📚 Split KV (Basic, FlashAttention-1)
<div id="mma-split-kv"></div>

```C++
// Split QKV across MMA (warps) using a naive matmul MMA & warp tiling policy.
// case: layout of 8 MMA (2x4), [after] kWarpTileSeqLenQ x kWarpTileSeqLenK (2x2) -> 32x2,32x2 = 64x64:
// |  [64,64]  |    warp_KV 0    |    warp_KV 1    |    warp_KV 2    |    warp_KV 3    |
// | warp_QP 0 |-- MMA 0,MMA 0 --|-- MMA 2,MMA 2 --|-- MMA 4,MMA 4 --|-- MMA 6,MMA 6 --|
// | warp_QP 0 |-- MMA 0,MMA 0 --|-- MMA 2,MMA 2 --|-- MMA 4,MMA 4 --|-- MMA 6,MMA 6 --|
// | warp_QP 1 |-- MMA 1,MMA 1 --|-- MMA 3,MMA 3 --|-- MMA 5,MMA 5 --|-- MMA 7,MMA 7 --|
// | warp_QP 1 |-- MMA 1,MMA 1 --|-- MMA 3,MMA 3 --|-- MMA 5,MMA 5 --|-- MMA 7,MMA 7 --|
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_kv_kernel(half* Q, half* K, half* V, half* O, ...);
```

- 📚 Split Q (Faster, FlashAttention-2)
<div id="mma-split-q"></div>

```C++
// Split Q across MMA (warps); every warp accesses the full KV, which reduces
// inter-warp communication via smem and warp shuffle.
// case: MMA = m16n8k16, Br=16x4=64, Bc=8x8=64, layout: 4 warps
// |   64x64   |      warp_KV 0       |
// | warp_QP 0 | MMA 0 ... MMA 0 (x8) |
// | warp_QP 1 | MMA 1 ... MMA 1 (x8) |
// | warp_QP 2 | MMA 2 ... MMA 2 (x8) |
// | warp_QP 3 | MMA 3 ... MMA 3 (x8) |
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_q_kernel(half* Q, half* K, half* V, half* O, ...);
```

- 📚 Split Q + Shared KV SMEM (**1/2 SRAM** vs FA2)
<div id="mma-share-kv"></div>

```C++
// K and V share the same shared memory buffer, improving block occupancy.
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_q_shared_kv_kernel(half* Q, half* K, half* V, half* O, ...);
```

- 📚 Split Q + Fully Shared QKV SMEM (**1/4 SRAM** vs FA2)

<div id="mma-share-qkv"></div>

```C++
// Q, K and V fully share the same shared memory, with Q prefetched s2r, improving
// block occupancy and reducing Q SMEM IO-access.
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_q_shared_qkv_kernel(half* Q, half* K, half* V, half* O, ...);
```
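All of the variants above stream K/V block by block and rescale partial results with an online softmax; they differ only in how that loop is tiled across MMAs and smem. Before the fine-grained tiling variants below, here is a scalar sketch of this streaming loop (hypothetical toy code for intuition, one query row in fp32, not a kernel from this repo); it shows why only one K/V block needs to be resident at a time:

```C++
#include <algorithm>
#include <cmath>
#include <vector>

// Online-softmax attention for one query row q[d] over K/V of shape [N, d],
// consuming K/V in blocks of Bc. The running max m and denominator l rescale
// the partial output, so only O(Bc * d) values are live at any moment.
void attn_one_row(const float* q, const float* K, const float* V,
                  float* out, int N, int d, int Bc) {
  std::vector<float> acc(d, 0.0f);
  float m = -INFINITY, l = 0.0f;
  for (int j0 = 0; j0 < N; j0 += Bc) {
    int jend = std::min(j0 + Bc, N);
    // Scores for this K block (the Q@K^T tile).
    std::vector<float> s(jend - j0);
    float m_new = m;
    for (int j = j0; j < jend; ++j) {
      float dot = 0.0f;
      for (int k = 0; k < d; ++k) dot += q[k] * K[j * d + k];
      s[j - j0] = dot;
      m_new = std::max(m_new, dot);
    }
    // Rescale the previous accumulator and denominator by exp(m - m_new).
    float scale = std::exp(m - m_new);
    l *= scale;
    for (int k = 0; k < d; ++k) acc[k] *= scale;
    // Accumulate this block (the P@V tile).
    for (int j = j0; j < jend; ++j) {
      float p = std::exp(s[j - j0] - m_new);
      l += p;
      for (int k = 0; k < d; ++k) acc[k] += p * V[j * d + k];
    }
    m = m_new;
  }
  for (int k = 0; k < d; ++k) out[k] = acc[k] / l;
}
```

The fine-grained tiling variants below shrink how much of each Q/K/V block must sit in SRAM at once.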
- 📚 Split Q + QK Fine-grained Tiling (**O(16xd) SRAM** vs FA2 **O(4xBrxd) SRAM**, `Headdim -> 1024`)

<div id="mma-tiling-qk"></div>

```C++
// Fine-grained tiling at the MMA level for Q@K^T results in a constant SRAM usage
// of 64 * kMmaAtomK for Q and K. For V, the SRAM complexity is O(kMmaAtomK * d),
// leading to an overall SRAM complexity of O(kMmaAtomK * d). Consequently, this
// approach allows us to extend D (head dimension) up to 1024.
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_q_tiling_qk_kernel(half* Q, half* K, half* V, half* O, ...);
```

- 📚 Split Q + Fully QKV Fine-grained Tiling (**O(2xBrx16)~O(1) SRAM** vs FA2 **O(4xBrxd) SRAM**)

<div id="mma-tiling-qkv"></div>

```C++
// Fine-grained tiling at the MMA level for all Q@K^T and P@V results in a constant
// SRAM usage of Br * 16 or Bc * 16 for Q, K, V, leading to an overall SRAM
// complexity of O(Br * 16). Consequently, this approach allows us to run faster
// than SDPA, with or without MMA Acc F32.
__global__ void // Q, K, V, O -> [B, H, N, D]
flash_attn_mma_stages_split_q_tiling_qkv_kernel(half* Q, half* K, half* V, half* O, ...);
```

💡NOTE: [📚Split Q + Fully QKV Fine-grained Tiling](#mma-tiling-qkv) has been refactored into 🤖[ffpa-attn](https://github.com/xlite-dev/ffpa-attn).

## 📖 200+ CUDA Kernels 🔥🔥 (Easy -> Hard++) ([©️back👆🏻](#contents))

<div id="cuda-kernel"></div>

The kernels listed here will guide you through a step-by-step progression, from easy to very challenging topics. The **workflow** for each topic is: custom **CUDA kernel** implementation -> PyTorch **Python bindings** -> run tests (a minimal sketch of this workflow appears just before the Easy table below). 👉TIPS: `*` = Tensor Cores (WMMA, MMA, CuTe); otherwise, CUDA Cores; `/` = not supported; `✔️` = supported; `❔` = TODO. Contents are listed as follows:

- [📚 Easy ⭐️](#cuda-kernel-easy-medium)
- [📚 Medium ⭐️⭐️](#cuda-kernel-easy-medium)
- [📚 Hard ⭐️⭐️⭐️](#cuda-kernel-hard)
- [📚 Hard+ ⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)
- [📚 Hard++ ⭐️⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)
- [📚 Triton ⭐️⭐️⭐️](#triton-kernel)
- [📚 CUTLASS ⭐️⭐️⭐️](#cutlass-kernel)

The [📚 Easy](#cuda-kernel-easy-medium) and [📚 Medium](#cuda-kernel-easy-medium) sections cover operations such as `element-wise, mat_trans, warp/block reduce, nms, relu, gelu, swish, layer-norm, rms-norm, online-softmax, dot-prod, embedding` and basic usage of `FP32`, `FP16`, `BF16` and `FP8`. The [📚 Hard](#cuda-kernel-hard), [📚 Hard+](#cuda-kernel-hard-plus) and [📚 Hard++](#cuda-kernel-hard-plus) sections delve into advanced topics, primarily operations like `sgemv, sgemm, hgemv, hgemm and flash-attention`. These sections also provide numerous kernels implemented with Tensor Cores via pure MMA PTX.
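As referenced above, here is a minimal sketch of that kernel -> bindings -> tests workflow: an f32 element-wise add exposed to PyTorch through a C++ extension. Names and file layout are illustrative, not the repo's actual bindings, and it launches on the default stream for brevity:

```C++
// elementwise_add.cu -- compile as a CUDA source via torch.utils.cpp_extension.
#include <torch/extension.h>

// Step 1: the custom CUDA kernel.
__global__ void elementwise_add_f32_kernel(const float* a, const float* b,
                                           float* c, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = a[i] + b[i];
}

// Step 2: the C++ wrapper PyTorch will call.
torch::Tensor elementwise_add_f32(torch::Tensor a, torch::Tensor b) {
  TORCH_CHECK(a.is_cuda() && b.is_cuda(), "expected CUDA tensors");
  auto c = torch::empty_like(a);
  int n = static_cast<int>(a.numel());
  int threads = 256, blocks = (n + threads - 1) / threads;
  elementwise_add_f32_kernel<<<blocks, threads>>>(
      a.data_ptr<float>(), b.data_ptr<float>(), c.data_ptr<float>(), n);
  return c;
}

// Step 3: the Python binding.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("elementwise_add_f32", &elementwise_add_f32, "f32 element-wise add (CUDA)");
}
```

Built with `torch.utils.cpp_extension.load(...)`, the op can then be checked against `torch.add` on CUDA tensors, mirroring the "run tests" step.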
### 📚 Easy ⭐️ & Medium ⭐️⭐️  ([©️back👆🏻](#cuda-kernel))
<div id="cuda-kernel-easy-medium"></div>

|📖 CUDA Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [elementwise_f32](./kernels/elementwise/elementwise.cu)|f32|/|[link](./kernels/elementwise/)|⭐️|
| ✔️ [elementwise_f32x4](./kernels/elementwise/elementwise.cu)|f32|/|[link](./kernels/elementwise/)|⭐️|
| ✔️ [elementwise_f16](./kernels/elementwise/elementwise.cu)|f16|/|[link](./kernels/elementwise/)|⭐️|
| ✔️ [elementwise_f16x2](./kernels/elementwise/elementwise.cu)|f16|/|[link](./kernels/elementwise/)|⭐️|
| ✔️ [elementwise_f16x8](./kernels/elementwise/elementwise.cu)|f16|/|[link](./kernels/elementwise/)|⭐️|
| ✔️ [elementwise_f16x8_pack](./kernels/elementwise/elementwise.cu)|f16|/|[link](./kernels/elementwise/)|⭐️⭐️|
| ✔️ [histogram_i32](./kernels/histogram/histogram.cu)|i32|/|[link](./kernels/histogram/)|⭐️|
| ✔️ [histogram_i32x4](./kernels/histogram/histogram.cu)|i32|/|[link](./kernels/histogram/)|⭐️|
| ✔️ [sigmoid_f32](./kernels/sigmoid/sigmoid.cu)|f32|/|[link](./kernels/sigmoid/)|⭐️|
| ✔️ [sigmoid_f32x4](./kernels/sigmoid/sigmoid.cu)|f32|/|[link](./kernels/sigmoid/)|⭐️|
| ✔️ [sigmoid_f16](./kernels/sigmoid/sigmoid.cu)|f16|/|[link](./kernels/sigmoid/)|⭐️|
| ✔️ [sigmoid_f16x2](./kernels/sigmoid/sigmoid.cu)|f16|/|[link](./kernels/sigmoid/)|⭐️|
| ✔️ [sigmoid_f16x8](./kernels/sigmoid/sigmoid.cu)|f16|/|[link](./kernels/sigmoid/)|⭐️|
| ✔️ [sigmoid_f16x8_pack](./kernels/sigmoid/sigmoid.cu)|f16|/|[link](./kernels/sigmoid/)|⭐️⭐️|
| ✔️ [relu_f32](./kernels/relu/relu.cu)|f32|/|[link](./kernels/relu/)|⭐️|
| ✔️ [relu_f32x4](./kernels/relu/relu.cu)|f32|/|[link](./kernels/relu/)|⭐️|
| ✔️ [relu_f16](./kernels/relu/relu.cu)|f16|/|[link](./kernels/relu/)|⭐️|
| ✔️ [relu_f16x2](./kernels/relu/relu.cu)|f16|/|[link](./kernels/relu/)|⭐️|
| ✔️ [relu_f16x8](./kernels/relu/relu.cu)|f16|/|[link](./kernels/relu/)|⭐️|
| ✔️ [relu_f16x8_pack](./kernels/relu/relu.cu)|f16|/|[link](./kernels/relu/)|⭐️⭐️|
| ✔️ [elu_f32](./kernels/elu/elu.cu)|f32|/|[link](./kernels/elu/)|⭐️|
| ✔️ [elu_f32x4](./kernels/elu/elu.cu)|f32|/|[link](./kernels/elu/)|⭐️|
| ✔️ [elu_f16](./kernels/elu/elu.cu)|f16|/|[link](./kernels/elu/)|⭐️|
| ✔️ [elu_f16x2](./kernels/elu/elu.cu)|f16|/|[link](./kernels/elu/)|⭐️|
| ✔️ [elu_f16x8](./kernels/elu/elu.cu)|f16|/|[link](./kernels/elu/)|⭐️|
| ✔️ [elu_f16x8_pack](./kernels/elu/elu.cu)|f16|/|[link](./kernels/elu/)|⭐️⭐️|
| ✔️ [gelu_f32](./kernels/gelu/gelu.cu)|f32|/|[link](./kernels/gelu/)|⭐️|
| ✔️ [gelu_f32x4](./kernels/gelu/gelu.cu)|f32|/|[link](./kernels/gelu/)|⭐️|
| ✔️ [gelu_f16](./kernels/gelu/gelu.cu)|f16|/|[link](./kernels/gelu/)|⭐️|
| ✔️ [gelu_f16x2](./kernels/gelu/gelu.cu)|f16|/|[link](./kernels/gelu/)|⭐️|
| ✔️ [gelu_f16x8](./kernels/gelu/gelu.cu)|f16|/|[link](./kernels/gelu/)|⭐️|
| ✔️ [gelu_f16x8_pack](./kernels/gelu/gelu.cu)|f16|/|[link](./kernels/gelu/)|⭐️⭐️|
| ✔️ [swish_f32](./kernels/swish/swish.cu)|f32|/|[link](./kernels/swish/)|⭐️|
| ✔️ [swish_f32x4](./kernels/swish/swish.cu)|f32|/|[link](./kernels/swish/)|⭐️|
| ✔️ [swish_f16](./kernels/swish/swish.cu)|f16|/|[link](./kernels/swish/)|⭐️|
| ✔️ [swish_f16x2](./kernels/swish/swish.cu)|f16|/|[link](./kernels/swish/)|⭐️|
| ✔️ [swish_f16x8](./kernels/swish/swish.cu)|f16|/|[link](./kernels/swish/)|⭐️|
| ✔️ [swish_f16x8_pack](./kernels/swish/swish.cu)|f16|/|[link](./kernels/swish/)|⭐️⭐️|
| ✔️ [hardswish_f32](./kernels/hardswish/hardswish.cu)|f32|/|[link](./kernels/hardswish/)|⭐️|
| ✔️ [hardswish_f32x4](./kernels/hardswish/hardswish.cu)|f32|/|[link](./kernels/hardswish/)|⭐️|
| ✔️ [hardswish_f16](./kernels/hardswish/hardswish.cu)|f16|/|[link](./kernels/hardswish/)|⭐️|
| ✔️ [hardswish_f16x2](./kernels/hardswish/hardswish.cu)|f16|/|[link](./kernels/hardswish/)|⭐️|
| ✔️ [hardswish_f16x8](./kernels/hardswish/hardswish.cu)|f16|/|[link](./kernels/hardswish/)|⭐️|
| ✔️ [hardswish_f16x8_pack](./kernels/hardswish/hardswish.cu)|f16|/|[link](./kernels/hardswish/)|⭐️⭐️|
| ✔️ [hardshrink_f32](./kernels/hardshrink/hardshrink.cu)|f32|/|[link](./kernels/hardshrink/)|⭐️|
| ✔️ [hardshrink_f32x4](./kernels/hardshrink/hardshrink.cu)|f32|/|[link](./kernels/hardshrink/)|⭐️|
| ✔️ [hardshrink_f16](./kernels/hardshrink/hardshrink.cu)|f16|/|[link](./kernels/hardshrink/)|⭐️|
| ✔️ [hardshrink_f16x2](./kernels/hardshrink/hardshrink.cu)|f16|/|[link](./kernels/hardshrink/)|⭐️|
| ✔️ [hardshrink_f16x8](./kernels/hardshrink/hardshrink.cu)|f16|/|[link](./kernels/hardshrink/)|⭐️|
| ✔️ [hardshrink_f16x8_pack](./kernels/hardshrink/hardshrink.cu)|f16|/|[link](./kernels/hardshrink/)|⭐️⭐️|
| ✔️ [embedding_f32](./kernels/embedding/embedding.cu)|f32|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f32x4](./kernels/embedding/embedding.cu)|f32|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f32x4_pack](./kernels/embedding/embedding.cu)|f32|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f16](./kernels/embedding/embedding.cu)|f16|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f16x2](./kernels/embedding/embedding.cu)|f16|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f16x8](./kernels/embedding/embedding.cu)|f16|/|[link](./kernels/embedding/)|⭐️|
| ✔️ [embedding_f16x8_pack](./kernels/embedding/embedding.cu)|f16|/|[link](./kernels/embedding/)|⭐️⭐️|
| ✔️ [mat_trans_f32_col2row{2d}](./kernels/mat-transpose/mat_transpose.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️|
| ✔️ [mat_trans_f32_row2col{2d}](./kernels/mat-transpose/mat_transpose.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️|
| ✔️ [mat_trans_f32_diagonal2d](./kernels/mat-transpose/mat_transpose.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️⭐️|
| ✔️ [mat_trans_f32x4_col2row{2d}](./kernels/mat-transpose/mat_transpose.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️⭐️|
| ✔️ [mat_trans_f32x4_row2col{2d}](./kernels/mat-transpose/mat_transpose.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️⭐️|
| ✔️ [mat_trans_cute](./kernels/mat-transpose/mat_transpose_cute.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️⭐️|
| ✔️ [warp_reduce_{all}](./kernels/reduce/block_all_reduce.cu)|all|all|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f32_f32](./kernels/reduce/block_all_reduce.cu)|f32|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f32x4_f32](./kernels/reduce/block_all_reduce.cu)|f32|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16_f16](./kernels/reduce/block_all_reduce.cu)|f16|f16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16_f32](./kernels/reduce/block_all_reduce.cu)|f16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16x2_f16](./kernels/reduce/block_all_reduce.cu)|f16|f16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16x2_f32](./kernels/reduce/block_all_reduce.cu)|f16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16x8_pack_f16](./kernels/reduce/block_all_reduce.cu)|f16|f16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_f16x8_pack_f32](./kernels/reduce/block_all_reduce.cu)|f16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16_bf16](./kernels/reduce/block_all_reduce.cu)|bf16|bf16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16_f32](./kernels/reduce/block_all_reduce.cu)|bf16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16x2_bf16](./kernels/reduce/block_all_reduce.cu)|bf16|bf16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16x2_f32](./kernels/reduce/block_all_reduce.cu)|bf16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16x8_pack_bf16](./kernels/reduce/block_all_reduce.cu)|bf16|bf16|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_bf16x8_pack_f32](./kernels/reduce/block_all_reduce.cu)|bf16|f32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_fp8_e4m3_f16](./kernels/reduce/block_all_reduce.cu)|fp8_e4m3|f16|[link](./kernels/reduce/)|⭐️⭐️⭐️|
| ✔️ [block_all_reduce_fp8_e5m2_f16](./kernels/reduce/block_all_reduce.cu)|fp8_e5m2|f16|[link](./kernels/reduce/)|⭐️⭐️⭐️|
| ✔️ [block_all_reduce_fp8_e4m3x16_pack_f16](./kernels/reduce/block_all_reduce.cu)|fp8_e4m3|f16|[link](./kernels/reduce/)|⭐️⭐️⭐️|
| ✔️ [block_all_reduce_fp8_e5m2x16_pack_f16](./kernels/reduce/block_all_reduce.cu)|fp8_e5m2|f16|[link](./kernels/reduce/)|⭐️⭐️⭐️|
| ✔️ [block_all_reduce_i8_i32](./kernels/reduce/block_all_reduce.cu)|i8|i32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [block_all_reduce_i8x16_pack_i32](./kernels/reduce/block_all_reduce.cu)|i8|i32|[link](./kernels/reduce/)|⭐️⭐️|
| ✔️ [dot_product_f32](./kernels/dot-product/dot_product.cu)|f32|f32|[link](./kernels/dot-product/)|⭐️⭐️|
| ✔️ [dot_product_f32x4](./kernels/dot-product/dot_product.cu)|f32|f32|[link](./kernels/dot-product/)|⭐️⭐️|
| ✔️ [dot_product_f16_f32](./kernels/dot-product/dot_product.cu)|f16|f32|[link](./kernels/dot-product/)|⭐️⭐️|
| ✔️ [dot_product_f16x2_f32](./kernels/dot-product/dot_product.cu)|f16|f32|[link](./kernels/dot-product/)|⭐️⭐️|
| ✔️ [dot_product_f16x8_pack_f32](./kernels/dot-product/dot_product.cu)|f16|f32|[link](./kernels/dot-product/)|⭐️⭐️|
| ✔️ [softmax_f32_per_tok](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [softmax_f32x4_per_tok](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [safe_softmax_f32_per_tok](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [safe_softmax_f32x4_per_tok](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [safe_softmax_f16_f32_per_tok](./kernels/softmax/softmax.cu)|f16|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [safe_softmax_f16x2_f32_per_tok](./kernels/softmax/softmax.cu)|f16|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [safe_softmax_f16x8_pack_f32_per_tok](./kernels/softmax/softmax.cu)|f16|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [online_safe_softmax_f32_per_token](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [online_safe_softmax_f32x4_pack_per_tok](./kernels/softmax/softmax.cu)|f32|f32|[link](./kernels/softmax/)|⭐️⭐️|
| ✔️ [rope_f32](./kernels/rope/rope.cu)|f32|f32|[link](./kernels/rope/)|⭐️⭐️|
| ✔️ [rope_f32x4_pack](./kernels/rope/rope.cu)|f32|f32|[link](./kernels/rope/)|⭐️⭐️|
| ✔️ [layer_norm_f32](./kernels/layer-norm/layer_norm.cu)|f32|f32|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f32x4](./kernels/layer-norm/layer_norm.cu)|f32|f32|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16_f16](./kernels/layer-norm/layer_norm.cu)|f16|f16|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16x2_f16](./kernels/layer-norm/layer_norm.cu)|f16|f16|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16x8_f16](./kernels/layer-norm/layer_norm.cu)|f16|f16|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16x8_pack_f16](./kernels/layer-norm/layer_norm.cu)|f16|f16|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16x8_pack_f32](./kernels/layer-norm/layer_norm.cu)|f16|f32|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [layer_norm_f16_f32](./kernels/layer-norm/layer_norm.cu)|f16|f32|[link](./kernels/layer-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f32](./kernels/rms-norm/rms_norm.cu)|f32|f32|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f32x4](./kernels/rms-norm/rms_norm.cu)|f32|f32|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16_f16](./kernels/rms-norm/rms_norm.cu)|f16|f16|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16x2_f16](./kernels/rms-norm/rms_norm.cu)|f16|f16|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16x8_f16](./kernels/rms-norm/rms_norm.cu)|f16|f16|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16x8_f32](./kernels/rms-norm/rms_norm.cu)|f16|f32|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16x8_pack_f16](./kernels/rms-norm/rms_norm.cu)|f16|f16|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16x8_pack_f32](./kernels/rms-norm/rms_norm.cu)|f16|f32|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [rms_norm_f16_f32](./kernels/rms-norm/rms_norm.cu)|f16|f32|[link](./kernels/rms-norm/)|⭐️⭐️|
| ✔️ [nms_f32](./kernels/nms/nms.cu)|f32|/|[link](./kernels/nms)|⭐️⭐️|
| ✔️ [merge_attn_states](./kernels/openai-triton/merge-attn-states/cuda_merge_attn_states.cu)|f16/bf16/f32|f32|[link](./kernels/openai-triton/merge-attn-states)|⭐️⭐️|
| ✔️ [notes v1(deprecated)](./kernels/notes-v1.cu)|f32|f32|/|⭐️⭐️|
| ✔️ [How to use nsys/ncu(timeline/ptx/sass)](./kernels/nvidia-nsight/)|/|/|[link](./kernels/nvidia-nsight/)|⭐️⭐️|
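The `warp_reduce_*` and `block_all_reduce_*` rows above all share one shuffle-based pattern. A minimal f32 sum sketch (illustrative code, assuming `blockDim.x` is a multiple of 32 and `*out` is zeroed beforehand; the repo's versions are templated over dtypes and vector widths):

```C++
#include <cuda_runtime.h>

// Warp-level sum via register shuffles: every lane ends up with the warp's total.
__device__ float warp_reduce_sum_f32(float v) {
  for (int offset = 16; offset > 0; offset >>= 1)
    v += __shfl_xor_sync(0xffffffff, v, offset);
  return v;
}

// Block-level sum: reduce within warps, stage one partial per warp in smem,
// then let warp 0 reduce the partials and atomically add the result to *out.
__global__ void block_all_reduce_sum_f32(const float* x, float* out, int n) {
  __shared__ float smem[32];  // one slot per warp (blockDim.x <= 1024)
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  float v = (i < n) ? x[i] : 0.0f;
  int lane = threadIdx.x % 32, warp = threadIdx.x / 32;
  v = warp_reduce_sum_f32(v);
  if (lane == 0) smem[warp] = v;
  __syncthreads();
  int num_warps = blockDim.x / 32;
  if (warp == 0) {
    v = (lane < num_warps) ? smem[lane] : 0.0f;
    v = warp_reduce_sum_f32(v);
    if (lane == 0) atomicAdd(out, v);
  }
}
```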
### 📚 Hard ⭐️⭐️⭐️ ([©️back👆🏻](#cuda-kernel))

<div id="cuda-kernel-hard"></div>

|📖 CUDA Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [sgemv_k32_f32](./kernels/sgemv/sgemv.cu)|f32|f32|[link](./kernels/sgemv/)|⭐️⭐️⭐️|
| ✔️ [sgemv_k128_f32x4](./kernels/sgemv/sgemv.cu)|f32|f32|[link](./kernels/sgemv/)|⭐️⭐️⭐️|
| ✔️ [sgemv_k16_f32](./kernels/sgemv/sgemv.cu)|f32|f32|[link](./kernels/sgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemv_k32_f16](./kernels/hgemv/hgemv.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemv_k128_f16x4](./kernels/hgemv/hgemv.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemv_k16_f16](./kernels/hgemv/hgemv.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [sgemm_naive_f32](./kernels/sgemm/sgemm.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️|
| ✔️ [sgemm_sliced_k_f32](./kernels/sgemm/sgemm.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_t_8x8_sliced_k_f32x4](./kernels/sgemm/sgemm.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_t_8x8_sliced_k...bcf](./kernels/sgemm/sgemm.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_t_8x8_sliced_k...dbuf](./kernels/sgemm/sgemm.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_t_8x8_sliced_k16...dbuf](./kernels/sgemm/sgemm_async.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_t_8x8_sliced_k16...async](./kernels/sgemm/sgemm_async.cu)|f32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_wmma_m16n16k8...stages*](./kernels/sgemm/sgemm_wmma_tf32_stage.cu)|tf32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [sgemm_wmma_m16n16k8...swizzle*](./kernels/sgemm/sgemm_wmma_tf32_stage.cu)|tf32|f32|[link](./kernels/sgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_naive_f16](./kernels/hgemm/naive/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️|
| ✔️ [hgemm_sliced_k_f16](./kernels/hgemm/naive/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8x8_sliced_k_f16x4](./kernels/hgemm/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8x8_sliced_k_f16x4_pack](./kernels/hgemm/naive/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8x8_sliced_k_f16x8_pack](./kernels/hgemm/naive/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8x8_sliced_k...dbuf](./kernels/hgemm/naive/hgemm.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8/16x8...k16/32...dbuf](./kernels/hgemm/naive/hgemm_async.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_t_8/16x8...k16/32...async](./kernels/hgemm/naive/hgemm_async.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...naive*](./kernels/hgemm/wmma/hgemm_wmma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...mma4x2*](./kernels/hgemm/wmma/hgemm_wmma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...mma4x4*](./kernels/hgemm/wmma/hgemm_wmma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...dbuf*](./kernels/hgemm/wmma/hgemm_wmma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m32n8k16....dbuf*](./kernels/hgemm/wmma/hgemm_wmma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...stages*](./kernels/hgemm/wmma/hgemm_wmma_stage.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wmma_m16n16k16...swizzle*](./kernels/hgemm/wmma/hgemm_wmma_stage.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...naive*](./kernels/hgemm/mma/basic/hgemm_mma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...mma2x4*](./kernels/hgemm/mma/basic/hgemm_mma.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...stages*](./kernels/hgemm/mma/basic/hgemm_mma_stage.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...swizzle*](./kernels/hgemm/mma/basic/hgemm_mma_stage.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...swizzle{smem}*](./kernels/hgemm/mma/swizzle/hgemm_mma_stage_swizzle.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_m16n8k16...swizzle{tn}{smem}*](./kernels/hgemm/mma/swizzle/hgemm_mma_stage_tn_swizzle_x4.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_stages_swizzle{smem}...cute*](./kernels/hgemm/cutlass/hgemm_mma_stage_tn_cute.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_cublas*](./kernels/hgemm/cublas/hgemm_cublas.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️|
| ✔️ [hgemm_wgmma_m64n128k16...tma{ws}{tn}*](./kernels/hgemm/wgmma/hgemm_wgmma_fp16acc_stages_tn.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [hgemm_wgmma_m64n128k16_fp32...tma*](./kernels/hgemm/wgmma/hgemm_wgmma_fp32acc_stages_tn.cu)|f16|f32|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
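The sliced-K GEMM entries above all grow out of the classic shared-memory tiled GEMM. A minimal sketch (illustrative only, one output element per thread, M/N/K assumed multiples of 16; the repo's 8x8 thread-tile, double-buffered and Tensor-Core versions build on this pattern):

```C++
// Smem-tiled SGEMM sketch: C = A * B, row-major f32. Each block computes a
// 16x16 C tile, sliding 16x16 A/B tiles along K ("sliced K") through smem.
#define TILE 16
__global__ void sgemm_tiled_f32(const float* A, const float* B, float* C,
                                int M, int N, int K) {
  __shared__ float As[TILE][TILE], Bs[TILE][TILE];
  int row = blockIdx.y * TILE + threadIdx.y;
  int col = blockIdx.x * TILE + threadIdx.x;
  float acc = 0.0f;
  for (int k0 = 0; k0 < K; k0 += TILE) {
    // Cooperative, coalesced loads of one A tile and one B tile.
    As[threadIdx.y][threadIdx.x] = A[row * K + (k0 + threadIdx.x)];
    Bs[threadIdx.y][threadIdx.x] = B[(k0 + threadIdx.y) * N + col];
    __syncthreads();
    for (int k = 0; k < TILE; ++k)
      acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
    __syncthreads();
  }
  C[row * N + col] = acc;
}
// Launch: dim3 block(TILE, TILE), grid(N / TILE, M / TILE);
```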
### 📚 Hard+ ⭐️⭐️⭐️⭐️ & Hard++ ⭐️⭐️⭐️⭐️⭐️ ([©️back👆🏻](#cuda-kernel))

- 📚 FlashAttention-2 MMA (MMA Acc F32/F16, swizzle, QKV smem share, fine-grained tiling, etc. 🎉)

<div id="cuda-kernel-hard-plus"></div>

|📖 CUDA Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [flash_attn_cute(naive)](./kernels/flash-attn/cutlass/flash_attn_cute.cu)|f16|f32|[link](./kernels/flash-attn/)|⭐️⭐️⭐️|
| ✔️ [How to implement MMA smem swizzle*](./kernels/swizzle/mma_simple_swizzle.cu)|f16|f16|[link](./kernels/swizzle)|⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages_split_kv*](./kernels/flash-attn/mma/basic/flash_attn_mma_split_kv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages_split_q*](./kernels/flash-attn/mma/basic/flash_attn_mma_split_q.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...shared_kv*](./kernels/flash-attn/mma/basic/flash_attn_mma_share_kv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...shared_qkv*](./kernels/flash-attn/mma/basic/flash_attn_mma_share_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...tiling_qk*](./kernels/flash-attn/mma/basic/flash_attn_mma_tiling_qk.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...tiling_qkv*](./kernels/flash-attn/mma/basic/flash_attn_mma_tiling_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...shared_kv{f32}*](./kernels/flash-attn/mma/basic/flash_attn_mma_share_kv_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...shared_qkv{f32}*](./kernels/flash-attn/mma/basic/flash_attn_mma_share_qkv_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...tiling_qk{f32}*](./kernels/flash-attn/mma/basic/flash_attn_mma_tiling_qk_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma_stages...tiling_qkv{f32}*](./kernels/flash-attn/mma/basic/flash_attn_mma_tiling_qkv_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_kv{f32}{rr}*](./kernels/flash-attn/mma/others/flash_attn_mma_share_kv_F32F16F16F32_rr.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_qkv{f32}{rr}*](./kernels/flash-attn/mma/others/flash_attn_mma_share_qkv_F32F16F16F32_rr.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_kv_swizzle{q}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_kv_swizzle_q.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_kv_swizzle{qk}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_kv_swizzle_qk.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_kv_swizzle{qkv}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_kv_swizzle_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_qkv_swizzle{q}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_qkv_swizzle_q.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_qkv_swizzle{qk}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_qkv_swizzle_qk.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...shared_qkv_swizzle{qkv}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_share_qkv_swizzle_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qk_swizzle{q}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qk_swizzle_q.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qk_swizzle{qk}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qk_swizzle_qk.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qk_swizzle{qkv}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qk_swizzle_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qkv_swizzle{q}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_q.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qkv_swizzle{qk}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_qk.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn_mma...tiling_qkv_swizzle{qkv}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_qkv.cu)|f16|f16|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn...tiling_qkv_swizzle{q}{f32}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_q_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn...tiling_qkv_swizzle{qk}{f32}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_qk_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [flash_attn...tiling_qkv_swizzle{qkv}{f32}*](./kernels/flash-attn/mma/swizzle/flash_attn_mma_tiling_qkv_swizzle_qkv_F32F16F16F32.cu)|f16|f32|[link](./kernels/flash-attn)|⭐️⭐️⭐️⭐️|

💡NOTE: **rr** means reduced register usage (for `d > 128`); **f32** means the MMA accumulates in FP32 (otherwise FP16), while the softmax Acc dtype is always FP32 for high precision; **swizzle**: currently, only smem swizzle for MMA is supported.

- 📚 FFPA Attention MMA (**1.8x~3x**🎉 faster vs SDPA EA for D > 256, which FA2 does not support)

|📖 CUDA Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [ffpa_mma_stages_split_q_L1_F16F16F16](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F16_L1.cu)|f16|f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [ffpa_mma_stages_split_q_L1_F16F16F32](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L1.cu)|f16|f32|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ✔️ [ffpa_mma_stages_split_q_L1_mixed_acc](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L1.cu)|f16|QK f32, PV f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L2_F16F16F16](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F16_L2.cu)|f16|f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L2_F16F16F32](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L2.cu)|f16|f32|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L2_mixed_acc](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L2.cu)|f16|QK f32, PV f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L3_F16F16F16](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F16_L3.cu)|f16|f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L3_F16F16F32](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L3.cu)|f16|f32|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
| ⚠️ [ffpa_mma_stages_split_q_L3_mixed_acc](https://github.com/xlite-dev/ffpa-attn/blob/main/csrc/cuffpa/ffpa_attn_F16F16F32_L3.cu)|f16|QK f32, PV f16|[link](https://github.com/xlite-dev/ffpa-attn)|⭐️⭐️⭐️⭐️|
💡NOTE: 🤖[ffpa-attn](https://github.com/xlite-dev/ffpa-attn): 📚FFPA is yet another Faster Flash Prefill Attention with O(1)🎉 SRAM complexity for headdim > 256, **1.8x~3x**🎉 faster than SDPA EA: [📈L20 ~1.9x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-l20), [📈A30 ~1.8x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-a30), [📈3080 ~2.9x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-3080), [📈4090 ~2.1x↑🎉](https://github.com/xlite-dev/ffpa-attn?tab=readme-ov-file#L1-bench-4090).

### 📚 Triton Kernel (OpenAI Triton) ⭐️⭐️⭐️ ([©️back👆🏻](#cuda-kernel))

<div id="triton-kernel"></div>

|📖 Triton Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [triton_vector_add_kernel](./kernels/openai-triton/vector-add/)|all|all|[link](./kernels/openai-triton/vector-add/)|⭐️⭐️|
| ✔️ [triton_fused_softmax(multi-stages)](./kernels/openai-triton/fused-softmax/)|f16/bf16/f32|f32|[link](./kernels/openai-triton/fused-softmax/)|⭐️⭐️⭐️|
| ✔️ [triton_fused_layer_norm(forward-pass)](./kernels/openai-triton/layer-norm/)|f16/bf16/f32|f32|[link](./kernels/openai-triton/layer-norm/)|⭐️⭐️⭐️|
| ✔️ [triton_fused_layer_norm(backward-pass)](./kernels/openai-triton/layer-norm/)|f16/bf16/f32|f32|[link](./kernels/openai-triton/layer-norm/)|⭐️⭐️⭐️|
| ✔️ [triton_merge_attn_states_kernel(w/ CUDA)](./kernels/openai-triton/merge-attn-states/)|f16/bf16/f32|f32|[link](./kernels/openai-triton/merge-attn-states/)|⭐️⭐️⭐️|

### 📚 CUTLASS/CuTe Kernel ⭐️⭐️⭐️ ([©️back👆🏻](#cuda-kernel))

<div id="cutlass-kernel"></div>

|📖 CUTLASS/CuTe Kernel| 📖 Elem DType| 📖 Acc DType| 📖 Docs | 📖 Level |
|:---|:---|:---|:---|:---|
| ✔️ [mat_transpose_cute](./kernels/mat-transpose/mat_transpose_cute.cu)|f32|/|[link](./kernels/mat-transpose/)|⭐️⭐️|
| ✔️ [flash_attn_cute(naive)](./kernels/flash-attn/cutlass/flash_attn_cute.cu)|f16|f32|[link](./kernels/flash-attn/)|⭐️⭐️⭐️|
| ✔️ [hgemv_f16_cute_kernel](./kernels/hgemv/hgemv_cute.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemv_f16x8_cute_kernel](./kernels/hgemv/hgemv_cute.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemv_tensor_core_cute_kernel](./kernels/hgemv/hgemv_cute.cu)|f16|f16|[link](./kernels/hgemv/)|⭐️⭐️⭐️|
| ✔️ [hgemm_mma_stages_swizzle{smem}...cute*](./kernels/hgemm/cutlass/hgemm_mma_stage_tn_cute.cu)|f16|f16|[link](./kernels/hgemm/)|⭐️⭐️⭐️|
| ✔️ [ws_hgemm_naive_cute_kernel](./kernels/ws-hgemm/naive_ws_hgemm_sm8x.cu)|f16|f16|[link](./kernels/ws-hgemm/)|⭐️⭐️⭐️|

## 📖 100+ HPC & Distributed Systems Tech Blogs

<div id="my-blogs-part-1"></div>

### 📚 HPC & Distributed Systems: Personal Tech Column ([©️back👆🏻](#contents))

|📖 Type & Title|📖 Author| 📖 Rating |
|:---|:---|:---|
| [[Diffusion Inference]📖 A short 2025 retrospective, written for Cache-DiT v1.2.1](https://zhuanlan.zhihu.com/p/2001692370358539662)|@DefTruth|⭐️⭐️|
| [[Diffusion Inference]📖 CacheDiT adds Z-Image distributed inference and cache acceleration](https://zhuanlan.zhihu.com/p/1978490962742374735)|@DefTruth|⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit adds FLUX.2 distributed inference and caching](https://zhuanlan.zhihu.com/p/1977698505834379041)|@DefTruth|⭐️⭐️|
| [[Diffusion Inference]📖 Cache acceleration: notes on understanding the FoCa formulas](https://zhuanlan.zhihu.com/p/1952056591068144338)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit: BlockAdapter brings cache acceleration to HunyuanImage-2.1!](https://zhuanlan.zhihu.com/p/1950849526400263083)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit + Qwen-Image-Lightning: 3.5-step inference!](https://zhuanlan.zhihu.com/p/1948696529180295613)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit: 2.4x inference speedup for Wan2.2-MoE!](https://zhuanlan.zhihu.com/p/1943976514321380955)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit: 2x lossless speedup for Qwen-Image-Edit!](https://zhuanlan.zhihu.com/p/1941503245764792443)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 cache-dit: 1.5x lossless speedup for Qwen-Image!](https://zhuanlan.zhihu.com/p/1938547315221705644)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 Cache acceleration: a brief analysis of the TaylorSeer algorithm](https://zhuanlan.zhihu.com/p/1937477466475197176)|@DefTruth|⭐️⭐️⭐️|
| [[Diffusion Inference]📖 A survey of DiT inference acceleration: caching](https://zhuanlan.zhihu.com/p/711223667)|@DefTruth|⭐️⭐️⭐️|
| [[Triton Programming][Basics]📖 A minimal introduction to Triton: Triton vector add](https://zhuanlan.zhihu.com/p/1902778199261291694)|@DefTruth|⭐️⭐️⭐️|
| [[Triton Programming][Basics]📖 Triton fused softmax kernel explained: from Python source to PTX](https://zhuanlan.zhihu.com/p/1899562146477609112)|@DefTruth|⭐️⭐️⭐️|
| [[Triton Programming][Basics]📖 vLLM Triton merge-attention-states kernel explained](https://zhuanlan.zhihu.com/p/1904937907703243110)|@DefTruth|⭐️⭐️⭐️|
| [[Triton Programming][Advanced]📖 vLLM prefix-prefill Triton kernel, illustrated](https://zhuanlan.zhihu.com/p/695799736)|@DefTruth|⭐️⭐️⭐️|
| [[Tensor/Sequence Parallelism]📖 Sequence parallelism: notes on BPT, Ring-Attention and Striped-Attention](https://zhuanlan.zhihu.com/p/6456708235)|@DefTruth|⭐️⭐️⭐️|
| [[vLLM Practice][Operators]📖 The vLLM operator development workflow: a hand-holding, step-by-step record](https://zhuanlan.zhihu.com/p/1892966682634473987)|@DefTruth|⭐️⭐️⭐️|
| [[vLLM Practice][Long Read]📖 vLLM + DeepSeek-R1 671B: multi-node deployment and bug-fixing notes](https://zhuanlan.zhihu.com/p/29950052712)|@DefTruth|⭐️⭐️⭐️|
| [[Attention Optimization]📖 FFPA (Split-D): extending FA2 to unlimited HeadDim, 2x↑🎉 vs SDPA EA](https://zhuanlan.zhihu.com/p/13975660308)|@DefTruth|⭐️⭐️⭐️|
| [[CUDA Basics][Opening]📖 LeetCUDA v3.0: a major upgrade for interview kernel drills](https://zhuanlan.zhihu.com/p/19862356369)|@DefTruth|⭐️⭐️⭐️⭐️|
| [[Distributed Training/Inference][Tensor/Sequence Parallelism]📖 DeepSpeed-Ulysses & Megatron-LM TP/SP, illustrated](https://zhuanlan.zhihu.com/p/5750410146)|@DefTruth|⭐️⭐️|
| [[VLM Inference][InternVL Series]📖 Notes on InternLM2/.../InternVL1.5: key points](https://zhuanlan.zhihu.com/p/702481058)|@DefTruth|⭐️⭐️|
| [[LLM Inference][TensorRT-LLM][50k words]📖 A guide to TensorRT-LLM deployment tuning](https://zhuanlan.zhihu.com/p/699333691)|@DefTruth|⭐️⭐️⭐️|
| [[LLM Inference][KV Cache]📖 GQA/YOCO/CLA/MLKV: intra- and inter-layer KV cache sharing](https://zhuanlan.zhihu.com/p/697311739)|@DefTruth|⭐️⭐️|
| [[LLM Inference][Prefill][Long Read]📖 vLLM Automatic Prefix Caching illustrated: TTFT optimization](https://zhuanlan.zhihu.com/p/693556044)|@DefTruth|⭐️⭐️⭐️|
| [[LLM Inference][Attention]📖 Illustrated: from online softmax to FlashAttention V1/V2/V3](https://zhuanlan.zhihu.com/p/668888063)|@DefTruth|⭐️⭐️⭐️|
| [[LLM Inference][Decoding]📖 FlashDecoding/FlashDecoding++: principles and illustrations](https://zhuanlan.zhihu.com/p/696075602)|@DefTruth|⭐️⭐️|
[[VLM推理优化][LLaVA系列]📖CLIP\u002FLLaVA\u002FLLaVA1.5\u002FVILA笔记: 核心点解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F683137074)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Attention优化][万字]📖TensorRT MHA\u002FMyelin vs FlashAttention-2](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678873216)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][PTX汇编]📖CUDA 12 PTX汇编: PRMT指令详解-通用模式](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F660630414)|@DefTruth|⭐️|\n| [[LLM推理优化][PTX汇编]📖CUDA 12 PTX汇编: LOP3指令详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F659741469)|@DefTruth|⭐️|\n| [[LLM推理优化][CUDA][3w字]📖高频面试题汇总-大模型手撕CUDA](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678903537)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(00): 通俗易懂讲解-快速反量化算法](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657072856)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(01): PRMT指令详解及FT源码解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657070837)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(02): 快速反量化之INT8转BF16](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657073159)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(03): LOP3指令详解及INT4转FP16\u002FBF16](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657073857)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖100+篇: 大模型推理各方向新发展整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F693680304)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖30+篇: LLM推理论文集-500页PDF](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F669777159)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖FlashDecoding++: 比FlashDecoding还要快！](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F665022589)|@DefTruth|⭐️|\n| [[LLM推理优化][LLM Infra整理]📖TensorRT-LLM开源，TensorRT 9.1也来了](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F662361469)|@DefTruth|⭐️|\n| [[LLM推理优化][LLM Infra整理]📖20+篇: LLM推理论文集-300页PDF](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F658091768)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖PagedAttention论文新鲜出炉](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F617015570)|@DefTruth|⭐️|\n| [[推理部署][CV\u002FNLP]📖FastDeploy三行代码搞定150+ CV、NLP模型部署](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F581326442)|@DefTruth|⭐️|\n| [[推理部署][CV]📖如何在lite.ai.toolkit(3.6k+ stars)中增加您的模型？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F523876625)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖美团 YOLOv6 ORT\u002FMNN\u002FTNN\u002FNCNN C++推理部署](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F533643238)|@DefTruth|⭐️⭐️|\n| [[推理部署][ONNX]📖ONNX推理加速技术文档-杂记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524023964)|@DefTruth|⭐️|\n| [[推理部署][TensorFlow]📖Mac源码编译TensorFlow C++指北](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524013615)|@DefTruth|⭐️|\n| [[推理部署][CV]📖1Mb!头部姿态估计: FSANet，一个小而美的模型(C++)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F447364201)|@DefTruth|⭐️|\n| [[推理部署][CV]📖opencv+ffmpeg编译打包全解指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F472115312)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖RobustVideoMatting视频抠图静态ONNX模型转换](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F459088407)|@DefTruth|⭐️|\n| [[推理部署][CV]📖190Kb!SSRNet年龄检测详细解读（含C++工程）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F462762797)|@DefTruth|⭐️|\n| [[推理部署][CV]📖MGMatting(CVPR2021)人像抠图C++应用记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F464732042)|@DefTruth|⭐️|\n| [[推理部署][CV]📖超准确人脸检测(带关键点)YOLO5Face C++工程详细记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F461878005)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖解决: ONNXRuntime(Python) GPU 部署配置记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F457484536)|@DefTruth|⭐️|\n| 
[[推理部署][CV]📖记录SCRFD(CVPR2021)人脸检测C++工程化(含docker镜像)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F455165568)|@DefTruth|⭐️⭐️|\n| [[推理部署][NCNN]📖野路子：记录一个解决onnx转ncnn时op不支持的trick](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F451446147)|@DefTruth|⭐️|\n| [[推理部署][CV]📖升级版NanoDet-Plus MNN\u002FTNN\u002FNCNN\u002FORT C++工程记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F450586647)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖超轻量级NanoDet MNN\u002FTNN\u002FNCNN\u002FORT C++工程记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F443419387)|@DefTruth|⭐️|\n| [[推理部署][CV]📖详细记录MGMatting之MNN、TNN和ORT C++移植](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F442949027)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖YOLOX NCNN\u002FMNN\u002FTNN\u002FONNXRuntime C++工程简记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F447364122)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖手动修改YoloX的tnnproto记录-TNN](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F425668734)|@DefTruth|⭐️|\n| [[推理部署][ORT]📖全网最详细 ONNXRuntime C++\u002FJava\u002FPython 资料！](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F414317269)|@DefTruth|⭐️|\n| [[推理部署][CV]📖RobustVideoMatting: C++工程化记录-实现篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F413280488)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖RobustVideoMatting: C++工程化记录-应用篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F412491918)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖ONNXRuntime C++ CMake 工程分析及编译](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F411887386)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖如何使用ORT C++ API处理NCHW和NHWC输入？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524230808)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖tnn-convert搭建简记-YOLOP转TNN](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F431418709)|@DefTruth|⭐️|\n| [[推理部署][CV]📖YOLOP ONNXRuntime C++工程化记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F411651933)|@DefTruth|⭐️⭐️|\n| [[推理部署][NCNN]📖超有用NCNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449765328)|@DefTruth|⭐️|\n| [[推理部署][MNN]📖超有用MNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449761992)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖超有用TNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449769615)|@DefTruth|⭐️|\n| [[推理部署][ONNX]📖超有用ONNX参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449773663)|@DefTruth|⭐️|\n| [[推理部署][ONNX]📖超有用ONNX模型结构参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449775926)|@DefTruth|⭐️|\n| [[推理部署][OpenCV-DNN]📖超有用OpenCV-DNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449778377)|@DefTruth|⭐️|\n| [[推理部署][Tensorflow]📖超有用Tensorflow C++工程化知识点](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449788027)|@DefTruth|⭐️|\n| [[推理部署][模型转换]📖深度学习模型转换资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449759361)|@DefTruth|⭐️|\n| [[技术随笔][C++][CMake]📖超有用CMake参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449779892)|@DefTruth|⭐️⭐️|\n| [[技术随笔][C++][3W字]📖静态链接和静态库实践指北-原理篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F595527528)|@DefTruth|⭐️⭐️⭐️|\n| [[技术随笔][C++]📖Mac下C++内存检查指北(Valgrind VS Asan)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F508470880)|@DefTruth|⭐️|\n| [[技术随笔][CV]📖torchlm: 人脸关键点检测库](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F467211561)|@DefTruth|⭐️⭐️|\n| [[技术随笔][ML]📖《统计学习方法-李航: 笔记-从原理到实现-基于R》](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F684885595)|@DefTruth|⭐️⭐️|\n| [[技术随笔][Git]📖如何优雅地git clone和git submodule？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F639136221)|@DefTruth|⭐️|\n| [[技术随笔][3D]📖人脸重建3D参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524034741)|@DefTruth|⭐️|\n| 
[[技术随笔][3D]📖BlendShapes参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524036145)|@DefTruth|⭐️|\n| [[技术随笔][3D]📖从源码安装Pytorch3D详细记录及学习资料](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F512347464)|@DefTruth|⭐️|\n| [[技术随笔][ML]📖200页:《统计学习方法：李航》笔记 -从原理到实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F461520847)|@DefTruth|⭐️⭐️|\n\n### 📚 高性能计算与分布式-技术博客推荐 ([©️back👆🏻](#contents))\n\n\u003Cdiv id=\"other-blogs\">\u003C\u002Fdiv>\n\n💡说明: 本小节整理一些自己比较喜欢的文章。欢迎大家提PR推荐更多优秀的文章！\n\n|📖 类型-标题|📖 作者| 📖 推荐 |\n|:---|:---|:---|\n| [[cute系列详解][入门]📖cutlass cute 101](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F660379052)|@朱小霖|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖CUTLASS 2.x & CUTLASS 3.x Intro 学习笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F710516489)|@BBuf|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：tiled copy](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1930389542784964333)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：tiled mma](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1937145378446226159)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：Layout Compose & Inverse](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1962625273636845008)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程: Layout Product & Divide](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1971945267294111573)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：TMA Copy](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2003198909405763007)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给进阶开发的 CuTe 笔记：permutationMNK 参数](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1973526710105419953)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][Layout]📖cute 之 Layout](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F661182311)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Layout]📖cute Layout 的代数和几何解释](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F662089556)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Tensor]📖cute 之 Tensor](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F663093816)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][MMA]📖cute 之 MMA抽象](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F663092747)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Copy]📖cute 之 Copy抽象](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F666232173)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cute 之 Swizzle](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F671419093)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cute Swizzle细谈](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F684250988)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cutlass swizzle机制解析（一）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F710337546)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cutlass swizzle机制解析（二）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F711398930)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖CUDA避免smem bank conflict的swizzle机制解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F4746910252)|@frankshi|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖布局代数实战：Swizzle自动推导](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1941306442683515068)|@melonedo|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 简单GEMM实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F667521327)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 GEMM流水线](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F665082713)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 高效GEMM实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F675308830)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM流水线: single\u002Fmulti-stage、pipeline](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712451053)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(一): ldmatrix的选择](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F702818267)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(二): 
TiledCopy与cp.async](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F703560147)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(三): Swizzle\u003CB,M,S>参数取值](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F713713957)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖Hopper Mixed GEMM的CUTLASS实现笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F714378343)|@BBuf|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖CUTLASS CuTe实战(一): 基础](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F690703999)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖CUTLASS CuTe实战(二): 应用](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F692078624)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖FlashAttention fp8实现（ada架构)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712314257)|@shengying.wei|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖FlashAttention 笔记: tiny-flash-attention解读](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F708867810)|@shengying.wei|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖使用cutlass cute复现flash attention](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F696323042)|@66RING|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖cutlass 基本认知](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F677616101)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖cutlass 软件架构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678915618)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖CUTLASS 基础介绍](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F671324125)|@进击的Killua|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖乱谈CUTLASS GTC2020 SLIDES](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F674693873)|@zzk again|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass block swizzle 和 tile iterator](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F679929705)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass bank conflict free的smem layout](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F681966685)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass 多级流水线](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F687397095)|@JoeNomad|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-前言](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F686198447)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-寄存器](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F688616037)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-Load和Cache](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F692445145)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-浮点运算](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695667044)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-整数运算](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F700921948)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-比特和逻辑操作](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712356884)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-Warp级和Uniform操作](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712357647)|@reed|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA 入门的正确姿势：how-to-optimize-gemm](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F478846788)|@白牛|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA（一）：CUDA 编程基础](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F645330027)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA（二）：GPU的内存体系及其优化指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F654027980)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA（三）：通用矩阵乘法：从入门到熟练](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657632577)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(1)：LayerNorm 算子的 CUDA 实现与优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F694974164)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(2)：SoftMax算子的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695307283)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(3)：Cross Entropy 的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695594396)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(4)：AdamW 优化器的 CUDA 
实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695611950)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(5)：激活函数与残差连接的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695703671)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(6)：embedding 层与 LM head 层的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695785781)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(7)：self-attention 的 CUDA 实现及优化 (上)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695898274)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(8)：self-attention 的 CUDA 实现及优化 (下)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F696197013)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA（四）：使用 CUDA 实现 Transformer 结构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F694416583)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][Copy]📖Async Copy及Memory Barrier指令的功能与实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F685168850)|@Frank Wang|⭐️⭐️⭐️|\n| [[CUDA优化][GEMV]📖深入浅出GPU优化系列：gemv优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F494144694)|@有了琦琦的棍子|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA element-wise 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1888630735520391519)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA transpose 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1899760505733756129)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA reduce 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1905661893739283464)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA GEMM 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1910636263666610461)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core初探](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620185229)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core-WMMA API编程入门](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620766588)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core-MMA PTX编程入门](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F621855199)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖CUDA Ampere Tensor Core HGEMM 矩阵乘法优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F555339335)|@nicholaswilde|⭐️⭐️⭐️|\n| [[GPU通信架构][精解]📖NVIDIA GPGPU（四）- 通信架构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F680262016)|@Bruce|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: 介绍](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9418379234)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: TorchDynamo](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9640728231)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: AOTAutograd](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9997263922)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: TorchInductor](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F11224299472)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: 算子融合](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F21053905491)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][实践]📖Torch.compile使用指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620163218)|@jhang|⭐️⭐️⭐️|\n| [[torch.compile][实践]📖Torch.compile详细示例解析教程](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F855291863)|@Bbuf|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖一文搞懂TorchDynamo原理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F630933479)|@吾乃阿尔法|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖理解torch.compile基本原理和使用方式](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F12712224407)|@俯仰|⭐️⭐️⭐️|\n\n## ©️License ([©️back👆🏻](#contents))\n\n\u003Cdiv id=\"License\">\u003C\u002Fdiv>\n\nGNU General Public License v3.0\n\n## 🎉Contribute ([©️back👆🏻](#contents))\n\n\u003Cdiv id=\"contribute\">\u003C\u002Fdiv>\n\nHow to contribute? 
Star this repo or check [🌤🌤CONTRIBUTE🎉🎉](.\u002FCONTRIBUTE.md).\n\n\u003Cdiv align='center'>\n\u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#xlite-dev\u002FLeetCUDA&Date\">\n \u003Cpicture>\n   \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png&theme=dark\" \u002F>\n   \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png\" \u002F>\n   \u003Cimg width=400 height=300 alt=\"Star History Chart\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png\" \u002F>\n \u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n## 📖 References ([©️back👆🏻](#contents))\n\u003Cdiv id=\"ref\">\u003C\u002Fdiv>\n\n- [flash-attention-minimal](https:\u002F\u002Fgithub.com\u002Ftspeterkim\u002Fflash-attention-minimal)\n- [tiny-flash-attention](https:\u002F\u002Fgithub.com\u002F66RING\u002Ftiny-flash-attention)\n- [cute-gemm](https:\u002F\u002Fgithub.com\u002Freed-lau\u002Fcute-gemm)\n- [cutlass_flash_atten_fp8](https:\u002F\u002Fgithub.com\u002Fweishengying\u002Fcutlass_flash_atten_fp8)\n- [cuda_learning](https:\u002F\u002Fgithub.com\u002Fifromeast\u002Fcuda_learning)\n- [cuda_hgemm](https:\u002F\u002Fgithub.com\u002FBruce-Lee-LY\u002Fcuda_hgemm)\n- [cuda-tensorcore-hgemm](https:\u002F\u002Fgithub.com\u002Fnicolaswilde\u002Fcuda-tensorcore-hgemm)\n- [How_to_optimize_in_GPU](https:\u002F\u002Fgithub.com\u002FLiu-xiandong\u002FHow_to_optimize_in_GPU\u002Ftree\u002Fmaster\u002Fsgemv)\n- [how-to-optim-algorithm-in-cuda](https:\u002F\u002Fgithub.com\u002FBBuf\u002Fhow-to-optim-algorithm-in-cuda)\n- [cute_gemm](https:\u002F\u002Fgithub.com\u002Fweishengying\u002Fcute_gemm)\n- [cutlass](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fcutlass)\n","\u003Cdiv align=\"center\">\n  \u003Cp align=\"center\">\n    \u003Ch2>📚 LeetCUDA：面向初学者的现代 CUDA 学习笔记，结合 PyTorch 🐑\u003C\u002Fh2>\n    \u003Cimg src='https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_cae076e970b2.png' >\n  \u003C\u002Fp>\n  \u003Cdiv align='center'>\n      \u003Cimg src=https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg >\n      \u003Cimg src=https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLanguage-CUDA-brightgreen.svg >\n      \u003Cimg src=https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002Fxlite-dev\u002FLeetCUDA.svg?style=dark >\n      \u003Cimg src=https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fxlite-dev\u002FLeetCUDA.svg?style=dark >\n      \u003Cimg src=https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-GPLv3.0-turquoise.svg >\n      \u003Ca href=\"https:\u002F\u002Fhellogithub.com\u002Frepository\u002F98348655a96640ca8ddcbc298edc901d\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fapi.hellogithub.com\u002Fv1\u002Fwidgets\u002Frecommend.svg?rid=98348655a96640ca8ddcbc298edc901d&claim_uid=ofSCbzTmdeQk3FD&theme=small\" alt=\"精选｜HelloGitHub\" \u002F>\u003C\u002Fa>\n  \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n📚 **LeetCUDA**：包含 **Tensor\u002FCUDA 核心、TF32\u002FF16\u002FBF16\u002FF8**，以及使用 PyTorch 的 [📖200+ CUDA 内核🔥](#cuda-kernel)，[📖100+ LLM\u002FCUDA🔥](#my-blogs-part-1) 博文，还有能够达到 **cuBLAS** `98%~100%` TFLOPS 性能的 [📖HGEMM⚡️](.\u002Fkernels\u002Fhgemm)，以及利用 Tensor Cores 和纯 MMA PTX 实现的 [📖flash-attn⚡️](.\u002Fkernels\u002Fflash-attn)。♥️ 如果可以的话，请为我点个 ⭐️ 
星来支持一下吧，兄弟 ~ ♥️\n\n\u003Cdiv align=\"center\">\n  \u003Cp align=\"center\">\n    \u003Ca href=\"#contribute\">🔥🔥 欢迎 PR：将你的内核加入 LeetCUDA！让我们一起让它变得更棒吧！🎉🎉\u003C\u002Fa> \u003Cbr>\n    \u003Ca href=https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fgraphs\u002Fcontributors > \u003Cimg src=https:\u002F\u002Fopencollective.com\u002Fleetcuda\u002Fcontributors.svg height=40px > \u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n## ©️引用🎉🎉\n\n```BibTeX\n@misc{LeetCUDA2025,\n  title={LeetCUDA: A Modern CUDA Learn Notes with PyTorch for Beginners},\n  url={https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA.git},\n  note={Open-source software available at https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA.git},\n  author={DefTruth and Many Others},\n  year={2025}\n}\n```\n\n\n## 📖 新闻 🔥🔥\n\u003Cdiv id=\"news\">\u003C\u002Fdiv>\n\n- [2026\u002F03] Cache-DiT **[🎉v1.3.0](https:\u002F\u002Fgithub.com\u002Fvipshop\u002Fcache-dit)** 已发布，主要更新包括：带有 [batched P2P](https:\u002F\u002Fcache-dit.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002FCONTEXT_PARALLEL) 的 [Ring](https:\u002F\u002Fcache-dit.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002FCONTEXT_PARALLEL) 注意力，[USP](https:\u002F\u002Fcache-dit.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002FCONTEXT_PARALLEL\u002F)（混合 Ring 和 Ulysses），混合 2D 和 3D 并行计算（💥[USP + TP](https:\u002F\u002Fcache-dit.readthedocs.io\u002Fen\u002Flatest\u002Fuser_guide\u002FHYBRID_PARALLEL\u002F)），以及 VAE-P 通信开销的降低。\n\n![arch](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_de7e08501500.png)\n\n- [2026\u002F04] **[🤖ffpa-attn](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn.git)** 发布了！这是又一个更快的 Flash 预填充注意力机制，具有 O(1)🎉SRAM 复杂度，适用于大头维度，相比 SDPA EA 提升了 **1.8x~3x↑**🎉：[📈L20 ~1.9x↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-l20)，[📈A30 ~1.8x↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-a30)，[📈4090 ~2.1x↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-4090)。目前，FFPA 支持自注意力、交叉注意力、分组\u002F多查询注意力以及带因果关系的大头维度注意力（D=320~1024）。而标准的 FlashAttention-2 只支持头维度不超过 256 的情况。\n\n\u003Cdiv align='center'>\n\u003Cimg height=\"320px\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_c22d74ae7cfd.png\" \u002F>\n\u003C\u002Fdiv>\n\n- [2024\u002F12] **[⚡️HGEMM](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FHGEMM.git)** 发布了！我们从零开始使用 Tensor Cores 结合 **WMMA、MMA 和 CuTe** API 编写了 HGEMM，实现了峰值🎉性能。\n\n## 📖 目录\n\u003Cdiv id=\"contents\">\u003C\u002Fdiv>\n\n- [📖 HGEMM-MMA 🎉🎉](#HGEMM-bench)\n- [📖 FlashAttention-MMA 🎉🎉](#fa-mma-bench)\n  - [📚 Split KV（基础版，FA-1）](#mma-split-kv)\n  - [📚 Split Q（加速版，FA-2）](#mma-split-q)\n  - [📚 Split Q + Shared KV](#mma-share-kv)\n  - [📚 Split Q + Shared QKV](#mma-share-qkv)\n  - [📚 Split Q + QK Tiling](#mma-tiling-qk)\n  - [📚 Split Q + QKV Tiling](#mma-tiling-qkv)\n- [📖 200+ CUDA 内核 🔥🔥](#cuda-kernel)\n  - [📚 简单 ⭐️](#cuda-kernel-easy-medium)\n  - [📚 中等 ⭐️⭐️](#cuda-kernel-easy-medium)\n  - [📚 困难 ⭐️⭐️⭐️](#cuda-kernel-hard)\n  - [📚 困难+ ⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)\n  - [📚 困难++ ⭐⭐⭐️⭐️⭐️](#cuda-kernel-hard-plus)\n  - [📚 Triton ⭐⭐⭐️](#triton-kernel)\n  - [📚 CUTLASS ⭐⭐⭐️](#cutlass-kernel)\n- [📖 100+ LLM\u002FCUDA 博文 🔥](#my-blogs-part-1)\n- [📖 如何贡献 👀👇](#contribute)\n\n\n## 📖 HGEMM 基准测试 🎉🎉\n\n\u003Cdiv id=\"HGEMM-bench\">\u003C\u002Fdiv>\n\n目前，在 NVIDIA L20、RTX 4090 和 RTX 3080 笔记本上，与 cuBLAS 默认的 Tensor Cores 算法相比，本仓库中的 `HGEMM (WMMA\u002FMMA\u002FCuTe)`（蓝色🔵）可以达到其（橙色🟠）性能的 `98%~100%`。更多详情请查看 [toy-hgemm 库⚡️⚡️](.\u002Fkernels\u002Fhgemm) 或 [HGEMM⚡️⚡️](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FHGEMM) 仓库。\n\n![toy-hgemm-library](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_d369fb27d66b.png)\n\n|📚特性 |📚特性 |📚特性 |📚特性|\n|:---:|:---:|:---:|:---:|\n|✔️CUDA\u002F**Tensor Cores**|✔️循环遍历 K|✔️分块(BMxBK)|✔️线程分块(T 8x8)|\n|✔️WMMA(m16n16k16)|✔️MMA(m16n8k16)|✔️LDST打包(128位)|✔️SMEM填充|\n|✔️异步复制|✔️分块 MMA|✔️分块 Warp|✔️**多阶段(2~4)**|\n|✔️寄存器双缓冲|✔️**块置换**|✔️**Warp置换**|✔️**SMEM置换**（CuTe\u002FMMA）|\n|✔️集体存储(Shfl)|✔️NN布局|✔️TN布局|✔️SGEMM FP32\u002FTF32|\n
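\n💡作为对照，下面给出一个假设性的最小 WMMA HGEMM 示意片段（核函数名与启动配置均为演示用，并非本仓库的优化实现）：它只演示 Tensor Cores 的基本调用方式，假设 M、N、K 均为 16 的倍数、A\u002FB\u002FC 均为行主序，上表中的多阶段流水、双缓冲与各类置换优化均未包含。\n\n```C++\n#include \u003Cmma.h>\nusing namespace nvcuda;\n\n\u002F\u002F 演示用极简核函数：每个 block 仅 1 个 warp（32 线程），grid = (N\u002F16, M\u002F16)，\n\u002F\u002F 每个 warp 负责 C 的一个 16x16 分块。\n__global__ void hgemm_wmma_16x16_sketch(const half *A, const half *B, half *C,\n                                        int M, int N, int K) {\n  const int tile_m = blockIdx.y; \u002F\u002F C 分块行号\n  const int tile_n = blockIdx.x; \u002F\u002F C 分块列号\n  wmma::fragment\u003Cwmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;\n  wmma::fragment\u003Cwmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;\n  wmma::fragment\u003Cwmma::accumulator, 16, 16, 16, half> c_frag;\n  wmma::fill_fragment(c_frag, __float2half(0.0f));\n  for (int k = 0; k \u003C K; k += 16) { \u002F\u002F 沿 K 维步进，未做 SMEM 分块与多阶段流水\n    wmma::load_matrix_sync(a_frag, A + tile_m * 16 * K + k, K);\n    wmma::load_matrix_sync(b_frag, B + k * N + tile_n * 16, N);\n    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); \u002F\u002F 16x16x16 Tensor Core MMA\n  }\n  wmma::store_matrix_sync(C + tile_m * 16 * N + tile_n * 16, c_frag, N,\n                          wmma::mem_row_major);\n}\n```\n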
\n## 📖 FA2-MMA 基准测试 🎉🎉\n\n\u003Cdiv id=\"fa-mma-bench\">\u003C\u002Fdiv>\n\n我还使用纯 MMA PTX 指令实现了 **FlashAttention-2**，它支持多阶段、分块 MMA、分块线程束、共享 KV SMEM、**完全共享 QKV SMEM**、**预取 Q s2r**、**预取 K\u002FV g2s**、**QKV 精细粒度分块**、集体存储等功能。更多详情请参阅 [flash-attn⚡️⚡️](.\u002Fkernels\u002Fflash-attn)。\n\n![flash-attn-mma](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_25f231f22aea.png)\n\n|📚功能 |📚功能 |📚功能 |📚功能|\n|:---:|:---:|:---:|:---:|\n|✔️张量核心|✔️循环遍历 N\u002FD |✔️分块(Br, Bc)|✔️MMA(m16n8k16)|\n|✔️打包 LDST(128 位)|✔️SMEM **Swizzle**\u002F填充 |✔️异步复制|✔️分块 MMAs|\n|✔️分块线程束|✔️多阶段(1\u002F2)|✔️集体存储(Shfl)|✔️**拆分 KV\u002FQ**|\n|✔️**共享 QKV** SMEM|✔️**预取 Q** s2r|✔️**预取 KV** g2s|✔️**QKV 精细粒度分块**|\n\n目前，对于小规模注意力 `(B \u003C= 4, H \u003C= 48, SeqLen \u003C= 8192, D \u003C= 64)`，在某些设备上它的运行速度可以超过 FA2\u002FSDPA。例如，在 NVIDIA RTX 3080 笔记本电脑上，[📚 Split Q + 完全共享 QKV SMEM](#mma-share-qkv) 方法可以达到 **55 TFLOPS (D=64)**，几乎比 FA2 快 **~1.5x** 🎉。而在 NVIDIA L20 上，🤖[ffpa-attn](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn) 方法可以达到 **104 TFLOPS (D=512)**，几乎比 SDPA（高效注意力）快 **~1.8x** 🎉。然而，对于大规模注意力，性能仍存在差距，敬请期待后续更新 ~（本仓库内核的 MMA 精度为 F16\u002FF32、softmax 精度为 F32，与 FA2 的 MMA\u002Fsoftmax F32 精度对比，👇基准测试）\n\n|算法| (B,H,N,D) | RTX 3080 笔记本电脑 | L20 | RTX 4090 |\n|:---:|:---:|:---:|:---:|:---:|\n|FlashAttention-2|(1,8,8192,64)|37 TFLOPS|100 TFLOPS|145 TFLOPS|\n|share-qkv+stage2|(1,8,8192,64)|**55 TFLOPS**|99 TFLOPS|**221 TFLOPS**|\n|FlashAttention-2|(1,48,8192,64)|37 TFLOPS|109 TFLOPS|163 TFLOPS|\n|share-qkv+stage2|(1,48,8192,64)|**48 TFLOPS**|107 TFLOPS|**224 TFLOPS**|\n|SDPA（高效注意力）|(1,48,8192,512)|16 TFLOPS|58 TFLOPS|85 TFLOPS|\n|🤖[ffpa-attn](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|(1,48,8192,512)|**39 TFLOPS**|**104 TFLOPS**|**200 TFLOPS**|\n|精度误差 vs FA2\u002FSDPA| \u002F | 最大：\u003C ~1e-3 | 最小：~0.0 | 平均：\u003C ~1e-5 |\n\n在 [flash-attn⚡️⚡️](.\u002Fkernels\u002Fflash-attn) 中同时实现了 `Split KV` 和 `Split Q` 两种方案，用于性能对比。其中，将所有 QKV 在 MMA（线程束）之间拆分的 `Split KV` 方法，比仅将 Q 在 MMA（线程束）之间拆分、而保留所有 MMA（线程束）对 KV 的访问权限的 `Split Q` 方法要慢。\n
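\n💡在阅读下面的 Split KV\u002FSplit Q 内核之前，可以先通过一个独立的小例子理解 FlashAttention 分块所依赖的 online softmax 重缩放（与上文一致，softmax 累加精度为 F32）。以下是一个假设性的串行示意（函数名与接口均为演示用，并非本仓库实现；真实内核中重缩放作用于输出 O 的累加器，而非整行概率）：\n\n```C++\n#include \u003Cmath.h>\n\n\u002F\u002F 对长度为 n 的一行分数 s[0..n) 按块大小 Bc 做 online softmax，结果写入 p：\n\u002F\u002F 每处理完一个块，就用新的行最大值 m 与部分和 l 对旧结果重缩放，\n\u002F\u002F 因此分数只需按 O(Bc) 的块驻留片上，这正是 FA 分块节省 SRAM 的关键。\n__device__ void online_softmax_row_sketch(const float *s, float *p, int n, int Bc) {\n  float m = -INFINITY; \u002F\u002F 迄今的行最大值\n  float l = 0.0f;      \u002F\u002F 迄今的 exp 部分和\n  for (int b0 = 0; b0 \u003C n; b0 += Bc) {\n    const int b1 = min(b0 + Bc, n);\n    float m_new = m;\n    for (int i = b0; i \u003C b1; ++i) m_new = fmaxf(m_new, s[i]);\n    const float scale = __expf(m - m_new); \u002F\u002F 旧统计量的重缩放因子\n    l *= scale;\n    for (int i = 0; i \u003C b0; ++i) p[i] *= scale; \u002F\u002F 重缩放已处理的部分\n    for (int i = b0; i \u003C b1; ++i) {\n      p[i] = __expf(s[i] - m_new);\n      l += p[i];\n    }\n    m = m_new;\n  }\n  for (int i = 0; i \u003C n; ++i) p[i] \u002F= l; \u002F\u002F 最后统一归一化\n}\n```\n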
\n- 📚 Split KV（基础版，FlashAttention-1）\n\u003Cdiv id=\"mma-split-kv\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F 将 QKV 在 MMA（线程束）之间拆分，采用朴素的矩阵乘法 MMA&线程束分块策略。\n\u002F\u002F 案例：8 个 MMA（2x4）的布局 [之后] kWarpTileSeqLenQxkWarpTileSeqLenK（2x2）-> 32x2,32x2=64x64：\n\u002F\u002F |  [64,64]  |    warp_KV 0    |    warp_KV 1    |    warp_KV 2    |    warp_KV 3    |\n\u002F\u002F | warp_QP 0 |-- MMA 0,MMA 0 --|-- MMA 2,MMA 2 --|-- MMA 4,MMA 4 --|-- MMA 6,MMA 6 --|\n\u002F\u002F | warp_QP 0 |-- MMA 0,MMA 0 --|-- MMA 2,MMA 2 --|-- MMA 4,MMA 4 --|-- MMA 6,MMA 6 --|\n\u002F\u002F | warp_QP 1 |-- MMA 1,MMA 1 --|-- MMA 3,MMA 3 --|-- MMA 5,MMA 5 --|-- MMA 7,MMA 7 --|\n\u002F\u002F | warp_QP 1 |-- MMA 1,MMA 1 --|-- MMA 3,MMA 3 --|-- MMA 5,MMA 5 --|-- MMA 7,MMA 7 --|\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_kv_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n- 📚 Split Q（更快，FlashAttention-2）\n\u003Cdiv id=\"mma-split-q\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F 将 Q 在 MMA（线程束）之间拆分，并保持所有 MMA（线程束）对 KV 的访问权限，\n\u002F\u002F 以减少通过 SMEM 和线程束交换带来的通信开销。\n\u002F\u002F 案例：MMA = m16n8k16，Br=16x4=64，Bc=8x8=64，布局：4 个线程束\n\u002F\u002F |   64x64   |      warp_KV 0       |\n\u002F\u002F | warp_QP 0 | MMA 0 ... MMA 0 (x8) |\n\u002F\u002F | warp_QP 1 | MMA 1 ... MMA 1 (x8) |\n\u002F\u002F | warp_QP 2 | MMA 2 ... MMA 2 (x8) |\n\u002F\u002F | warp_QP 3 | MMA 3 ... MMA 3 (x8) |\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_q_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n- 📚 Split Q + 共享 KV SMEM（**1\u002F2 SRAM** vs FA2）\n\u003Cdiv id=\"mma-share-kv\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F K 和 V 共享同一片共享内存，从而提高块占用率。\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_q_shared_kv_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n- 📚 Split Q + 完全共享 QKV SMEM（**1\u002F4 SRAM** vs FA2）\n\n\u003Cdiv id=\"mma-share-qkv\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F Q、K 和 V 完全共享同一片共享内存，并预取 Q s2r，从而提高块占用率，\n\u002F\u002F 同时减少 Q 对 SMEM 的访问次数。\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_q_shared_qkv_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n- 📚 Split Q + QK 精细粒度分块（**O(16xd) SRAM** vs FA2 **O(4xBrxd) SRAM**，`Headdim -> 1024`）\n\n\u003Cdiv id=\"mma-tiling-qk\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F 在 MMA 层面对 Q@K^T 进行精细粒度分块，使得 Q 和 K 的 SRAM 使用量恒定为\n\u002F\u002F 64 * kMmaAtomK。对于 V，SRAM 复杂度为 O(kMmaAtomK * d)，因此整体 SRAM 复杂度为 O(kMmaAtomK * d)。由此，这种方法允许我们将 D（头维度）扩展到 1024。\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_q_tiling_qk_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n- 📚 Split Q + 完全 QKV 精细粒度分块（**O(2xBrx16)~O(1) SRAM** vs FA2 **O(4xBrxd) SRAM**）\n\n\u003Cdiv id=\"mma-tiling-qkv\">\u003C\u002Fdiv>\n\n```C++\n\u002F\u002F 在 MMA 层面对所有的 Q@K^T 和 P@V 进行精细粒度分块，使得 Q、K 和 V 的 SRAM 使用量恒定为\n\u002F\u002F Br * 16 或 Bc * 16，整体 SRAM 复杂度为 O(Br * 16)。因此，这种方法让我们能够以比 SDPA 更快的速度运行，无论是否使用 MMA F32 精度。\n__global__ void \u002F\u002F Q, K, V, O -> [B, H, N, D]\nflash_attn_mma_stages_split_q_tiling_qkv_kernel(half* Q, half* K, half* V, half* O, ...);\n```\n\n💡注意：[📚 Split Q + 完全 QKV 精细粒度分块](#mma-tiling-qkv) 已经被重构到 🤖[ffpa-attn](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn) 中。\n\n## 📖 200+ CUDA 核函数 🔥🔥（简单 -> 困难++）（[©️返回👆🏻](#contents)）\n\n\u003Cdiv id=\"cuda-kernel\">\u003C\u002Fdiv>\n\n此处列出的核函数将带你循序渐进地学习，从简单主题一直进阶到极具挑战性的主题。每个主题的**工作流程**如下：自定义 **CUDA 核函数**实现 -> PyTorch **Python 绑定** -> 运行测试（列表之后附有一个最小的绑定示意）。👉提示：`*` = Tensor Core（WMMA、MMA、CuTe），否则为 CUDA Core；`\u002F` = 不支持；`✔️` = 支持；`❔` = 待办。内容列表如下：\n\n- [📚 简单 ⭐️](#cuda-kernel-easy-medium)\n- [📚 中等 ⭐️⭐️](#cuda-kernel-easy-medium)\n- [📚 困难 ⭐️⭐️⭐️](#cuda-kernel-hard)\n- [📚 困难+ ⭐️⭐️⭐️⭐️](#cuda-kernel-hard-plus)\n- [📚 困难++ ⭐⭐⭐️⭐️⭐️](#cuda-kernel-hard-plus)\n- [📚 Triton ⭐⭐⭐️](#triton-kernel)\n- [📚 CUTLASS ⭐⭐⭐️](#cutlass-kernel)\n
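\n💡上面提到的「PyTorch **Python 绑定**」环节大致如下。这是一个假设性的最小示意（`my_relu_f32` 与模块组织均为演示用，并非本仓库的实际构建脚本；各 kernel 目录下有完整的绑定与测试代码）：\n\n```C++\n#include \u003Ctorch\u002Fextension.h>\n\n\u002F\u002F 假设性声明：实际的核函数启动逻辑在对应的 .cu 文件中实现\nvoid my_relu_f32(torch::Tensor x, torch::Tensor y);\n\n\u002F\u002F 通过 pybind11 暴露给 Python，之后即可在测试脚本中与 torch 原生算子对比数值与耗时\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"my_relu_f32\", &my_relu_f32, \"ReLU f32 CUDA kernel (demo)\");\n}\n```\n\n配合 `torch.utils.cpp_extension.load(...)` 即时编译加载后，就完成了「核函数 -> 绑定 -> 测试」的闭环。\n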
\n[📚 简单](#cuda-kernel-easy-medium) 和 [📚 中等](#cuda-kernel-easy-medium) 部分涵盖了诸如 `逐元素运算、矩阵转置、warp\u002Fblock 归约、NMS、ReLU、GELU、Swish、层归一化、RMS 归一化、在线 Softmax、点积、嵌入` 等操作，以及 `FP32`、`FP16`、`BF16` 和 `FP8` 的基本用法。[📚 困难](#cuda-kernel-hard)、[📚 困难+](#cuda-kernel-hard-plus) 和 [📚 困难++](#cuda-kernel-hard-plus) 部分则深入探讨更高级的主题，主要聚焦于 `SGEMV、SGEMM、Hgemv、Hgemm 和 Flash Attention` 等操作。这些部分还提供了大量使用 Tensor Core 和纯 MMA PTX 实现的核函数。\n\n### 📚 简单 ⭐️ & 中等 ⭐️⭐️  ([©️返回👆🏻](#cuda-kernel))\n\u003Cdiv id=\"cuda-kernel-easy-medium\">\u003C\u002Fdiv>\n\n|📖 CUDA 内核 | 📖 元素数据类型 | 📖 累加器数据类型 | 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [elementwise_f32](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f32|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️|\n| ✔️ [elementwise_f32x4](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f32|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️|\n| ✔️ [elementwise_f16](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️|\n| ✔️ [elementwise_f16x2](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️|\n| ✔️ [elementwise_f16x8](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️|\n| ✔️ [elementwise_f16x8_pack](.\u002Fkernels\u002Felementwise\u002Felementwise.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felementwise\u002F)|⭐️⭐️|\n| ✔️ [histogram_i32](.\u002Fkernels\u002Fhistogram\u002Fhistogram.cu)|i32|\u002F|[link](.\u002Fkernels\u002Fhistogram\u002F)|⭐️|\n| ✔️ [histogram_i32x4](.\u002Fkernels\u002Fhistogram\u002Fhistogram.cu)|i32|\u002F|[link](.\u002Fkernels\u002Fhistogram\u002F)|⭐️|\n| ✔️ [sigmoid_f32](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️|\n| ✔️ [sigmoid_f32x4](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️|\n| ✔️ [sigmoid_f16](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️|\n| ✔️ [sigmoid_f16x2](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️|\n| ✔️ [sigmoid_f16x8](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️|\n| ✔️ [sigmoid_f16x8_pack](.\u002Fkernels\u002Fsigmoid\u002Fsigmoid.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fsigmoid\u002F)|⭐️⭐️|\n| ✔️ [relu_f32](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️|\n| ✔️ [relu_f32x4](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️|\n| ✔️ [relu_f16](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️|\n| ✔️ [relu_f16x2](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️|\n| ✔️ [relu_f16x8](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️|\n| ✔️ [relu_f16x8_pack](.\u002Fkernels\u002Frelu\u002Frelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Frelu\u002F)|⭐️⭐️|\n| ✔️ [elu_f32](.\u002Fkernels\u002Felu\u002Felu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️|\n| ✔️ [elu_f32x4](.\u002Fkernels\u002Felu\u002Felu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️|\n| ✔️ [elu_f16](.\u002Fkernels\u002Felu\u002Felu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️|\n| ✔️ [elu_f16x2](.\u002Fkernels\u002Felu\u002Felu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️|\n| ✔️ [elu_f16x8](.\u002Fkernels\u002Felu\u002Felu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️|\n| ✔️ [elu_f16x8_pack](.\u002Fkernels\u002Felu\u002Felu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Felu\u002F)|⭐️⭐️|\n| ✔️ [gelu_f32](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️|\n| ✔️ [gelu_f32x4](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️|\n| ✔️ 
[gelu_f16](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️|\n| ✔️ [gelu_f16x2](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️|\n| ✔️ [gelu_f16x8](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️|\n| ✔️ [gelu_f16x8_pack](.\u002Fkernels\u002Fgelu\u002Fgelu.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fgelu\u002F)|⭐️⭐️|\n| ✔️ [swish_f32](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️|\n| ✔️ [swish_f32x4](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️|\n| ✔️ [swish_f16](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️|\n| ✔️ [swish_f16x2](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️|\n| ✔️ [swish_f16x8](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️|\n| ✔️ [swish_f16x8_pack](.\u002Fkernels\u002Fswish\u002Fswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fswish\u002F)|⭐️⭐️|\n| ✔️ [hardswish_f32](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️|\n| ✔️ [hardswish_f32x4](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️|\n| ✔️ [hardswish_f16](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️|\n| ✔️ [hardswish_f16x2](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️|\n| ✔️ [hardswish_f16x8](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️|\n| ✔️ [hardswish_f16x8_pack](.\u002Fkernels\u002Fhardswish\u002Fhardswish.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardswish\u002F)|⭐️⭐️|\n| ✔️ [hardshrink_f32](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️|\n| ✔️ [hardshrink_f32x4](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️|\n| ✔️ [hardshrink_f16](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️|\n| ✔️ [hardshrink_f16x2](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️|\n| ✔️ [hardshrink_f16x8](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️|\n| ✔️ [hardshrink_f16x8_pack](.\u002Fkernels\u002Fhardshrink\u002Fhardshrink.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fhardshrink\u002F)|⭐️⭐️|\n| ✔️ [embedding_f32](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ [embedding_f32x4](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ [embedding_f32x4_pack](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ [embedding_f16](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ [embedding_f16x2](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ 
[embedding_f16x8](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️|\n| ✔️ [embedding_f16x8_pack](.\u002Fkernels\u002Fembedding\u002Fembedding.cu)|f16|\u002F|[link](.\u002Fkernels\u002Fembedding\u002F)|⭐️⭐️|\n| ✔️ [mat_trans_f32_col2row{2d}](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️|\n| ✔️ [mat_trans_f32_row2col{2d}](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️|\n| ✔️ [mat_trans_f32_diagonal2d](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️⭐️|\n| ✔️ [mat_trans_f32x4_col2row{2d}](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️⭐️|\n| ✔️ [mat_trans_f32x4_row2col{2d}](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️⭐️|\n| ✔️ [mat_trans_cute](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose_cute.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️⭐️|\n| ✔️ [warp_reduce_{all}](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|all|all|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f32_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f32|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f32x4_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f32|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16x2_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16x2_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16x8_pack_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_f16x8_pack_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|f16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16_bf16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|bf16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16x2_bf16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|bf16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16x2_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16x8_pack_bf16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|bf16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_bf16x8_pack_f32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|bf16|f32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_fp8_e4m3_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|fp8_e4m3|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️⭐️|\n| ✔️ 
[block_all_reduce_fp8_e5m2_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|fp8_e5m2|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️⭐️|\n| ✔️ [block_all_reduce_fp8_e4m3x16_pack_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|fp8_e4m3|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️⭐️|\n| ✔️ [block_all_reduce_fp8_e5m2x16_pack_f16](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|fp8_e5m2|f16|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️⭐️|\n| ✔️ [block_all_reduce_i8_i32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|i8|i32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [block_all_reduce_i8x16_pack_i32](.\u002Fkernels\u002Freduce\u002Fblock_all_reduce.cu)|i8|i32|[link](.\u002Fkernels\u002Freduce\u002F)|⭐️⭐️|\n| ✔️ [dot_product_f32](.\u002Fkernels\u002Fdot-product\u002Fdot_product.cu)|f32|f32|[link](.\u002Fkernels\u002Fdot-product\u002F)|⭐️⭐️|\n| ✔️ [dot_product_f32x4](.\u002Fkernels\u002Fdot-product\u002Fdot_product.cu)|f32|f32|[link](.\u002Fkernels\u002Fdot-product\u002F)|⭐️⭐️|\n| ✔️ [dot_product_f16_f32](.\u002Fkernels\u002Fdot-product\u002Fdot_product.cu)|f16|f32|[link](.\u002Fkernels\u002Fdot-product\u002F)|⭐️⭐️|\n| ✔️ [dot_product_f16x2_f32](.\u002Fkernels\u002Fdot-product\u002Fdot_product.cu)|f16|f32|[link](.\u002Fkernels\u002Fdot-product\u002F)|⭐️⭐️|\n| ✔️ [dot_product_f16x8_pack_f32](.\u002Fkernels\u002Fdot-product\u002Fdot_product.cu)|f16|f32|[link](.\u002Fkernels\u002Fdot-product\u002F)|⭐️⭐️|\n| ✔️ [softmax_f32_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [softmax_f32x4_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [safe_softmax_f32_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [safe_softmax_f32x4_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [safe_softmax_f16_f32_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f16|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [safe_softmax_f16x2_f32_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f16|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [safe_softmax_f16x8_pack_f32_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f16|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [online_safe_softmax_f32_per_token](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [online_safe_softmax_f32x4_pack_per_tok](.\u002Fkernels\u002Fsoftmax\u002Fsoftmax.cu)|f32|f32|[link](.\u002Fkernels\u002Fsoftmax\u002F)|⭐️⭐️|\n| ✔️ [rope_f32](.\u002Fkernels\u002Frope\u002Frope.cu)|f32|f32|[link](.\u002Fkernels\u002Frope\u002F)|⭐️⭐️|\n| ✔️ [rope_f32x4_pack](.\u002Fkernels\u002Frope\u002Frope.cu)|f32|f32|[link](.\u002Fkernels\u002Frope\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f32](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f32|f32|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f32x4](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f32|f32|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f16_f16](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f16x2_f16](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ 
[layer_norm_f16x8_f16](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f16x8_pack_f16](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f16x8_pack_f32](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f32|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [layer_norm_f16_f32](.\u002Fkernels\u002Flayer-norm\u002Flayer_norm.cu)|f16|f32|[link](.\u002Fkernels\u002Flayer-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f32](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f32|f32|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f32x4](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f32|f32|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16_f16](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16x2_f16](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16x8_f16](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16x8_f32](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f32|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16x8_pack_f16](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f16|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16x8_pack_f32](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f32|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [rms_norm_f16_f32](.\u002Fkernels\u002Frms-norm\u002Frms_norm.cu)|f16|f32|[link](.\u002Fkernels\u002Frms-norm\u002F)|⭐️⭐️|\n| ✔️ [nms_f32](.\u002Fkernels\u002Fnms\u002Fnms.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fnms)|⭐️⭐️|\n| ✔️ [merge_attn_states](.\u002Fkernels\u002Fopenai-triton\u002Fmerge-attn-states\u002Fcuda_merge_attn_states.cu)|f16\u002Fbf16\u002Ff32|f32|[link](.\u002Fkernels\u002Fopenai-triton\u002Fmerge-attn-states)|⭐️⭐️|\n| ✔️ [notes v1(deprecated)](.\u002Fkernels\u002Fnotes-v1.cu)|f32|f32|\u002F|⭐️⭐️|\n| ✔️ [How to use nsys\u002Fncu(timeline\u002Fptx\u002Fsass)](.\u002Fkernels\u002Fnvidia-nsight\u002F)|\u002F|\u002F|[link](.\u002Fkernels\u002Fnvidia-nsight\u002F)|⭐️⭐️|\n\n### 📚 困难 ⭐⭐⭐️（[©️返回👆🏻](#cuda-kernel)）\n\n\u003Cdiv id=\"cuda-kernel-hard\">\u003C\u002Fdiv>\n\n|📖 CUDA 内核| 📖 元素数据类型| 📖 累加器数据类型| 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [sgemv_k32_f32](.\u002Fkernels\u002Fsgemv\u002Fsgemv.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemv_k128_f32x4](.\u002Fkernels\u002Fsgemv\u002Fsgemv.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemv_k16_f32](.\u002Fkernels\u002Fsgemv\u002Fsgemv.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_k32_f16](.\u002Fkernels\u002Fhgemv\u002Fhgemv.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_k128_f16x4](.\u002Fkernels\u002Fhgemv\u002Fhgemv.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_k16_f16](.\u002Fkernels\u002Fhgemv\u002Fhgemv.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_naive_f32](.\u002Fkernels\u002Fsgemm\u002Fsgemm.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️|\n| ✔️ [sgemm_sliced_k_f32](.\u002Fkernels\u002Fsgemm\u002Fsgemm.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_t_8x8_sliced_k_f32x4](.\u002Fkernels\u002Fsgemm\u002Fsgemm.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| 
✔️ [sgemm_t_8x8_sliced_k...bcf](.\u002Fkernels\u002Fsgemm\u002Fsgemm.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_t_8x8_sliced_k...dbuf](.\u002Fkernels\u002Fsgemm\u002Fsgemm.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_t_8x8_sliced_k16...dbuf](.\u002Fkernels\u002Fsgemm\u002Fsgemm_async.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_t_8x8_sliced_k16...async](.\u002Fkernels\u002Fsgemm\u002Fsgemm_async.cu)|f32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_wmma_m16n16k8...stages*](.\u002Fkernels\u002Fsgemm\u002Fsgemm_wmma_tf32_stage.cu)|tf32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [sgemm_wmma_m16n16k8...swizzle*](.\u002Fkernels\u002Fsgemm\u002Fsgemm_wmma_tf32_stage.cu)|tf32|f32|[链接](.\u002Fkernels\u002Fsgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_naive_f16](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️|\n| ✔️ [hgemm_sliced_k_f16](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8x8_sliced_k_f16x4](.\u002Fkernels\u002Fhgemm\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8x8_sliced_k_f16x4_pack](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8x8_sliced_k_f16x8_pack](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8x8_sliced_k...dbuf](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8\u002F16x8...k16\u002F32...dbuf](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm_async.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_t_8\u002F16x8...k16\u002F32...async](.\u002Fkernels\u002Fhgemm\u002Fnaive\u002Fhgemm_async.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...naive*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...mma4x2*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...mma4x4*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...dbuf*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m32n8k16....dbuf*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...stages*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma_stage.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wmma_m16n16k16...swizzle*](.\u002Fkernels\u002Fhgemm\u002Fwmma\u002Fhgemm_wmma_stage.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_m16n8k16...naive*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fbasic\u002Fhgemm_mma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_m16n8k16...mma2x4*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fbasic\u002Fhgemm_mma.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_m16n8k16...stages*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fbasic\u002Fhgemm_mma_stage.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ 
[hgemm_mma_m16n8k16...swizzle*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fbasic\u002Fhgemm_mma_stage.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_m16n8k16...swizzle{smem}*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fswizzle\u002Fhgemm_mma_stage_swizzle.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_m16n8k16...swizzle{tn}{smem}*](.\u002Fkernels\u002Fhgemm\u002Fmma\u002Fswizzle\u002Fhgemm_mma_stage_tn_swizzle_x4.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_stages_swizzle{smem}...cute*](.\u002Fkernels\u002Fhgemm\u002Fcutlass\u002Fhgemm_mma_stage_tn_cute.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_cublas*](.\u002Fkernels\u002Fhgemm\u002Fcublas\u002Fhgemm_cublas.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️|\n| ✔️ [hgemm_wgmma_m64n128k16...tma{ws}{tn}*](.\u002Fkernels\u002Fhgemm\u002Fwgmma\u002Fhgemm_wgmma_fp16acc_stages_tn.cu)|f16|f16|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_wgmma_m64n128k16_fp32...tma*](.\u002Fkernels\u002Fhgemm\u002Fwgmma\u002Fhgemm_wgmma_fp32acc_stages_tn.cu)|f16|f32|[链接](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n\n### 📚 更难 ⭐️⭐️⭐️⭐️ 和 非常难 ⭐️⭐️⭐️⭐️⭐️（[©️返回👆🏻](#cuda-kernel)）\n\n- 📚 FlashAttention-2 MMA（MMA 累加器 F32\u002FF16，swizzle，QKV 共享片上存储，细粒度分块等🎉）\n\n\u003Cdiv id=\"cuda-kernel-hard-plus\">\u003C\u002Fdiv>\n\n|📖 CUDA 内核 | 📖 元素数据类型 | 📖 累加数据类型 | 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [flash_attn_cute(朴素)](.\u002Fkernels\u002Fflash-attn\u002Fcutlass\u002Fflash_attn_cute.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn\u002F) | ⭐️⭐️⭐️ |\n| ✔️ [如何实现 MMA 共享内存置换*](.\u002Fkernels\u002Fswizzle\u002Fmma_simple_swizzle.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fswizzle) | ⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段分割 KV*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_split_kv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段分割 Q*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_split_q.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…共享 KV*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_share_kv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…共享 QKV*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_share_qkv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…分块 QK*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_tiling_qk.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…分块 QKV*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_tiling_qkv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…共享 KV{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_share_kv_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…共享 QKV{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_share_qkv_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…分块 QK{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_tiling_qk_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma 阶段…分块 
QKV{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fbasic\u002Fflash_attn_mma_tiling_qkv_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 KV{f32}{rr}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fothers\u002Fflash_attn_mma_share_kv_F32F16F16F32_rr.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 QKV{f32}{rr}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fothers\u002Fflash_attn_mma_share_qkv_F32F16F16F32_rr.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 KV 置换{q}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_kv_swizzle_q.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 KV 置换{qk}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_kv_swizzle_qk.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 KV 置换{qkv}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_kv_swizzle_qkv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 QKV 置换{q}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_qkv_swizzle_q.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 QKV 置换{qk}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_qkv_swizzle_qk.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…共享 QKV 置换{qkv}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_share_qkv_swizzle_qkv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…分块 QK 置换{q}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qk_swizzle_q.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…分块 QK 置换{qk}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qk_swizzle_qk.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…分块 QK 置换{qkv}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qk_swizzle_qkv.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…分块 QKV 置换{q}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qkv_swizzle_q.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn_mma…分块 QKV 置换{qk}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qkv_swizzle_qk.cu) | f16 | f16 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn…分块 QKV 置换{q}{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qkv_swizzle_q_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn…分块 QKV 置换{qk}{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qkv_swizzle_qk_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n| ✔️ [flash_attn…分块 QKV 置换{qkv}{f32}*](.\u002Fkernels\u002Fflash-attn\u002Fmma\u002Fswizzle\u002Fflash_attn_mma_tiling_qkv_swizzle_qkv_F32F16F16F32.cu) | f16 | f32 | [链接](.\u002Fkernels\u002Fflash-attn) | ⭐️⭐️⭐️⭐️ |\n\n💡注：**rr** 表示减少寄存器使用（适用于 `d>128`）；**f32** 表示 MMA 累加时使用 FP32 数据类型，否则为 FP16。softmax 的累加数据类型始终为 FP32 以保证高精度；**swizzle** 目前仅支持 MMA 的共享内存置换。\n
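\n💡下面用一个假设性的小片段补充说明上文 **swizzle** 的基本思想：用 XOR 打散共享内存的列块索引，使按列访问时不同 row 落到不同 bank 组（参数取值可参考博客部分的 Swizzle 系列文章；此处仅为示意，并非本仓库实现）。\n\n```C++\n\u002F\u002F 假设 SMEM 逻辑布局为 [kRows][64] 的 half：每行 128 字节，恰好覆盖 32 个 4 字节 bank；\n\u002F\u002F 以 16 字节（8 个 half）为一个列块，每行共 8 个列块，col_blk ∈ [0, 8)。\n__device__ __forceinline__ int swizzle_col_blk(int row, int col_blk) {\n  return col_blk ^ (row & 7); \u002F\u002F 不同 row 的同一逻辑列块映射到不同物理列块\n}\n\n\u002F\u002F 使用示意：写入与读取必须采用同一映射，例如\n\u002F\u002F half *dst = smem + row * 64 + swizzle_col_blk(row, col_blk) * 8;\n```\n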
的共享内存置换。\n\n- 📚 FFPA 注意力 MMA（比 SDPA EA 快 **1.8x~3x**🎉，D > 256，不支持 FA2）\n\n|📖 CUDA内核| 📖 元素数据类型| 📖 累加器数据类型| 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [ffpa_mma_stages_split_q_L1_F16F16F16](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F16_L1.cu)|f16|f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ✔️ [ffpa_mma_stages_split_q_L1_F16F16F32](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L1.cu)|f16|f32|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ✔️ [ffpa_mma_stages_split_q_L1_mixed_acc](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L1.cu)|f16|QK f32, PV f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L2_F16F16F16](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F16_L2.cu)|f16|f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L2_F16F16F32](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L2.cu)|f16|f32|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L2_mixed_acc](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L2.cu)|f16|QK f32, PV f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L3_F16F16F16](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F16_L3.cu)|f16|f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L3_F16F16F32](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L3.cu)|f16|f32|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n| ⚠️ [ffpa_mma_stages_split_q_L3_mixed_acc](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn\u002Fblob\u002Fmain\u002Fcsrc\u002Fcuffpa\u002Fffpa_attn_F16F16F32_L3.cu)|f16|QK f32, PV f16|[link](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)|⭐️⭐️⭐️⭐️|\n\n💡注： 🤖[ffpa-attn](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn)：📚FFPA——另一种更快的Flash Prefill注意力机制，针对head dim > 256时具有O(1)🎉SRAM复杂度，比SDPA EA快**1.8倍~3倍**🎉：[📈L20 ~1.9倍↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-l20)，[📈 A30 ~1.8倍↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-a30)，[📈3080 ~2.9倍↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-3080)，[📈4090 ~2.1倍↑🎉](https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002Fffpa-attn?tab=readme-ov-file#L1-bench-4090)。\n\n\n\n### 📚 Triton内核（OpenAI Triton）⭐️⭐️⭐️ ([©️返回👆🏻](#cuda-kernel))\n\n\u003Cdiv id=\"triton-kernel\">\u003C\u002Fdiv>\n\n|📖 Triton内核| 📖 元素数据类型| 📖 累加器数据类型| 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [triton_vector_add_kernel](.\u002Fkernels\u002Fopenai-triton\u002Fvector-add\u002F)|all|all|[link](.\u002Fkernels\u002Fopenai-triton\u002Fvector-add\u002F)|⭐️⭐️|\n| ✔️ 
[triton_fused_softmax(多阶段)](.\u002Fkernels\u002Fopenai-triton\u002Ffused-softmax\u002F)|f16\u002Fbf16\u002Ff32|f32|[link](.\u002Fkernels\u002Fopenai-triton\u002Ffused-softmax\u002F)|⭐️⭐️⭐️|\n| ✔️ [triton_fused_layer_norm(前向传播)](.\u002Fkernels\u002Fopenai-triton\u002Flayer-norm\u002F)|f16\u002Fbf16\u002Ff32|f32|[link](.\u002Fkernels\u002Fopenai-triton\u002Flayer-norm\u002F)|⭐️⭐️⭐️|\n| ✔️ [triton_fused_layer_norm(反向传播)](.\u002Fkernels\u002Fopenai-triton\u002Flayer-norm\u002F)|f16\u002Fbf16\u002Ff32|f32|[link](.\u002Fkernels\u002Fopenai-triton\u002Flayer-norm\u002F)|⭐️⭐️⭐️|\n| ✔️ [triton_merge_attn_states_kernel(配合CUDA)](.\u002Fkernels\u002Fopenai-triton\u002Fmerge-attn-states\u002F)|f16\u002Fbf16\u002Ff32|f32|[link](.\u002Fkernels\u002Fopenai-triton\u002Fmerge-attn-states\u002F)|⭐️⭐️⭐️|\n\n### 📚 CUTLASS\u002FCuTe内核 ⭐️⭐️⭐️ ([©️返回👆🏻](#cuda-kernel))\n\n\u003Cdiv id=\"cutlass-kernel\">\u003C\u002Fdiv>\n\n|📖 CUTLASS\u002FCuTe内核| 📖 元素数据类型| 📖 累加器数据类型| 📖 文档 | 📖 难度 |\n|:---|:---|:---|:---|:---|\n| ✔️ [mat_transpose_cute](.\u002Fkernels\u002Fmat-transpose\u002Fmat_transpose_cute.cu)|f32|\u002F|[link](.\u002Fkernels\u002Fmat-transpose\u002F)|⭐️⭐️|\n| ✔️ [flash_attn_cute(朴素版)](.\u002Fkernels\u002Fflash-attn\u002Fcutlass\u002Fflash_attn_cute.cu)|f16|f32|[link](.\u002Fkernels\u002Fflash-attn\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_f16_cute_kernel](.\u002Fkernels\u002Fhgemv\u002Fhgemv_cute.cu)|f16|f16|[link](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_f16x8_cute_kernel](.\u002Fkernels\u002Fhgemv\u002Fhgemv_cute.cu)|f16|f16|[link](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemv_tensor_core_cute_kernel](.\u002Fkernels\u002Fhgemv\u002Fhgemv_cute.cu)|f16|f16|[link](.\u002Fkernels\u002Fhgemv\u002F)|⭐️⭐️⭐️|\n| ✔️ [hgemm_mma_stages_swizzle{smem}...cute*](.\u002Fkernels\u002Fhgemm\u002Fcutlass\u002Fhgemm_mma_stage_tn_cute.cu)|f16|f16|[link](.\u002Fkernels\u002Fhgemm\u002F)|⭐️⭐️⭐️|\n| ✔️ [ws_hgemm_naive_cute_kernel](.\u002Fkernels\u002Fws-hgemm\u002Fnaive_ws_hgemm_sm8x.cu)|f16|f16|[link](.\u002Fkernels\u002Fws-hgemm\u002F)|⭐️⭐️⭐️|\n\n## 📖 100+ 高性能计算与分布式-技术博客\n\n\u003Cdiv id=\"my-blogs-part-1\">\u003C\u002Fdiv>\n\n### 📚 高性能计算与分布式-个人技术专栏 ([©️返回👆🏻](#contents))\n\n|📖 类型-标题|📖 作者| 📖 推荐 |\n|:---|:---|:---|\n| [[扩散推理]📖简短的2025年总结，写在Cache-DiT v1.2.1之际](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2001692370358539662)|@DefTruth|⭐️⭐️|\n| [[扩散推理]📖CacheDiT支持Z-Image分布式推理和缓存加速​​](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1978490962742374735)|@DefTruth|⭐️⭐️|\n| [[扩散推理]📖cache-dit支持FLUX.2分布式推理和Cache](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1977698505834379041)|@DefTruth|⭐️⭐️|\n| [[扩散推理]📖Cache加速-FoCa公式理解记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1952056591068144338)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖cache-dit: BlockAdapter支持HunyuanImage-2.1 Cache加速!](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1950849526400263083)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖cache-dit + Qwen-Image-Lightning 实现 3.5 steps 推理!](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1948696529180295613)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖cache-dit: Wan2.2-MoE 2.4x 推理加速!](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1943976514321380955)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖cache-dit: Qwen-Image-Edit 2x 无损加速!](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1941503245764792443)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖cache-dit: Qwen-Image 1.5x 无损加速!](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1938547315221705644)|@DefTruth|⭐️⭐️⭐|\n| [[扩散推理]📖Cache加速-TaylorSeer算法简析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1937477466475197176)|@DefTruth|⭐️⭐️⭐|\n| 
[[扩散推理]📖DiT推理加速综述: Caching](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F711223667)|@DefTruth|⭐️⭐️⭐|\n| [[Triton编程][基础]📖Triton极简入门: Triton Vector Add](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1902778199261291694)|@DefTruth|⭐️⭐️⭐|\n| [[Triton编程][基础]📖Triton Fused Softmax Kernel详解: 从Python源码到PTX](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1899562146477609112)|@DefTruth|⭐️⭐️⭐|\n| [[Triton编程][基础]📖vLLM Triton Merge Attention States Kernel详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1904937907703243110)|@DefTruth|⭐️⭐️⭐|\n| [[Triton编程][进阶]📖vLLM Prefix Prefill Triton Kernel图解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695799736)|@DefTruth|⭐️⭐️⭐️|\n| [[张量\u002F序列并行]📖序列并行: BPT、Ring-Attention及Striped-Attention笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F6456708235)|@DefTruth|⭐️⭐️⭐|\n| [[vLLM实践][算子]📖vLLM算子开发流程：”保姆级“详细记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1892966682634473987)|@DefTruth|⭐️⭐️⭐|\n| [[vLLM实践][万字]📖vLLM + DeepSeek-R1 671B 多机部署及修Bug笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F29950052712)|@DefTruth|⭐️⭐️⭐|\n| [[Attention优化]📖FFPA(Split-D): FA2无限HeadDim扩展，2x↑🎉 vs SDPA EA](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F13975660308)|@DefTruth|⭐️⭐️⭐️|\n| [[CUDA基础][开篇]📖LeetCUDA: v3.0 大升级-面试刷题不迷路](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F19862356369)|@DefTruth|⭐️⭐️⭐⭐️|\n| [[分布式训推][张量\u002F序列并行]📖图解DeepSpeed-Ulysses&Megatron-LM TP\u002FSP](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F5750410146)|@DefTruth|⭐️⭐️|\n| [[VLM推理优化][InternVL系列]📖InternLM2\u002F...\u002FInternVL1.5系列笔记: 核心点解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F702481058)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][TensorRT-LLM][5w字]📖TensorRT-LLM部署调优-指北](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F699333691)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][KV Cache优化]📖GQA\u002FYOCO\u002FCLA\u002FMLKV: 层内和层间KV Cache共享](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F697311739)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Prefill优化][万字]📖图解vLLM Automatic Prefix Caching: TTFT优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F693556044)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][Attention优化]📖图解:从Online-Softmax到FlashAttention V1\u002FV2\u002FV3](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F668888063)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][Decoding优化]📖原理&图解FlashDecoding\u002FFlashDecoding++](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F696075602)|@DefTruth|⭐️⭐️|\n| [[VLM推理优化][LLaVA系列]📖CLIP\u002FLLaVA\u002FLLaVA1.5\u002FVILA笔记: 核心点解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F683137074)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Attention优化][万字]📖TensorRT MHA\u002FMyelin vs FlashAttention-2](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678873216)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][PTX汇编]📖CUDA 12 PTX汇编: PRMT指令详解-通用模式](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F660630414)|@DefTruth|⭐️|\n| [[LLM推理优化][PTX汇编]📖CUDA 12 PTX汇编: LOP3指令详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F659741469)|@DefTruth|⭐️|\n| [[LLM推理优化][CUDA][3w字]📖高频面试题汇总-大模型手撕CUDA](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678903537)|@DefTruth|⭐️⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(00): 通俗易懂讲解-快速反量化算法](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657072856)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(01): PRMT指令详解及FT源码解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657070837)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(02): 快速反量化之INT8转BF16](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657073159)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][Weight Only]📖WINT8\u002F4-(03): 
LOP3指令详解及INT4转FP16\u002FBF16](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657073857)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖100+篇: 大模型推理各方向新发展整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F693680304)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖30+篇: LLM推理论文集-500页PDF](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F669777159)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖FlashDecoding++: 比FlashDecoding还要快！](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F665022589)|@DefTruth|⭐️|\n| [[LLM推理优化][LLM Infra整理]📖TensorRT-LLM开源，TensorRT 9.1也来了](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F662361469)|@DefTruth|⭐️|\n| [[LLM推理优化][LLM Infra整理]📖20+篇: LLM推理论文集-300页PDF](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F658091768)|@DefTruth|⭐️⭐️|\n| [[LLM推理优化][LLM Infra整理]📖PagedAttention论文新鲜出炉](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F617015570)|@DefTruth|⭐️|\n| [[推理部署][CV\u002FNLP]📖FastDeploy三行代码搞定150+ CV、NLP模型部署](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F581326442)|@DefTruth|⭐️|\n| [[推理部署][CV]📖如何在lite.ai.toolkit(3.6k+ stars)中增加您的模型？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F523876625)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖美团 YOLOv6 ORT\u002FMNN\u002FTNN\u002FNCNN C++推理部署](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F533643238)|@DefTruth|⭐️⭐️|\n| [[推理部署][ONNX]📖ONNX推理加速技术文档-杂记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524023964)|@DefTruth|⭐️|\n| [[推理部署][TensorFlow]📖Mac源码编译TensorFlow C++指北](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524013615)|@DefTruth|⭐️|\n| [[推理部署][CV]📖1Mb!头部姿态估计: FSANet，一个小而美的模型(C++)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F447364201)|@DefTruth|⭐️|\n| [[推理部署][CV]📖opencv+ffmpeg编译打包全解指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F472115312)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖RobustVideoMatting视频抠图静态ONNX模型转换](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F459088407)|@DefTruth|⭐️|\n| [[推理部署][CV]📖190Kb!SSRNet年龄检测详细解读（含C++工程）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F462762797)|@DefTruth|⭐️|\n| [[推理部署][CV]📖MGMatting(CVPR2021)人像抠图C++应用记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F464732042)|@DefTruth|⭐️|\n| [[推理部署][CV]📖超准确人脸检测(带关键点)YOLO5Face C++工程详细记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F461878005)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖解决: ONNXRuntime(Python) GPU 部署配置记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F457484536)|@DefTruth|⭐️|\n| [[推理部署][CV]📖记录SCRFD(CVPR2021)人脸检测C++工程化(含docker镜像)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F455165568)|@DefTruth|⭐️⭐️|\n| [[推理部署][NCNN]📖野路子：记录一个解决onnx转ncnn时op不支持的trick](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F451446147)|@DefTruth|⭐️|\n| [[推理部署][CV]📖升级版NanoDet-Plus MNN\u002FTNN\u002FNCNN\u002FORT C++工程记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F450586647)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖超轻量级NanoDet MNN\u002FTNN\u002FNCNN\u002FORT C++工程记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F443419387)|@DefTruth|⭐️|\n| [[推理部署][CV]📖详细记录MGMatting之MNN、TNN和ORT C++移植](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F442949027)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖YOLOX NCNN\u002FMNN\u002FTNN\u002FONNXRuntime C++工程简记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F447364122)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖手动修改YoloX的tnnproto记录-TNN](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F425668734)|@DefTruth|⭐️|\n| [[推理部署][ORT]📖全网最详细 ONNXRuntime C++\u002FJava\u002FPython 资料！](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F414317269)|@DefTruth|⭐️|\n| [[推理部署][CV]📖RobustVideoMatting: 
C++工程化记录-实现篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F413280488)|@DefTruth|⭐️⭐️|\n| [[推理部署][CV]📖RobustVideoMatting: C++工程化记录-应用篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F412491918)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖ONNXRuntime C++ CMake 工程分析及编译](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F411887386)|@DefTruth|⭐️⭐️|\n| [[推理部署][ORT]📖如何使用ORT C++ API处理NCHW和NHWC输入？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524230808)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖tnn-convert搭建简记-YOLOP转TNN](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F431418709)|@DefTruth|⭐️|\n| [[推理部署][CV]📖YOLOP ONNXRuntime C++工程化记录](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F411651933)|@DefTruth|⭐️⭐️|\n| [[推理部署][NCNN]📖超有用NCNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449765328)|@DefTruth|⭐️|\n| [[推理部署][MNN]📖超有用MNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449761992)|@DefTruth|⭐️|\n| [[推理部署][TNN]📖超有用TNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449769615)|@DefTruth|⭐️|\n| [[推理部署][ONNX]📖超有用ONNX参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449773663)|@DefTruth|⭐️|\n| [[推理部署][ONNX]📖超有用ONNX模型结构参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449775926)|@DefTruth|⭐️|\n| [[推理部署][OpenCV-DNN]📖超有用OpenCV-DNN参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449778377)|@DefTruth|⭐️|\n| [[推理部署][Tensorflow]📖超有用Tensorflow C++工程化知识点](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449788027)|@DefTruth|⭐️|\n| [[推理部署][模型转换]📖深度学习模型转换资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449759361)|@DefTruth|⭐️|\n| [[技术随笔][C++][CMake]📖超有用CMake参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F449779892)|@DefTruth|⭐️⭐️|\n| [[技术随笔][C++] [3W字]📖静态链接和静态库实践指北-原理篇](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F595527528)|@DefTruth|⭐️⭐️⭐️|\n| [[技术随笔][C++]📖Mac下C++内存检查指北(Valgrind VS Asan)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F508470880)|@DefTruth|⭐️|\n| [[技术随笔][CV]📖torchlm: 人脸关键点检测库](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F467211561)|@DefTruth|⭐️⭐️|\n| [[技术随笔][ML]📖《统计学习方法-李航: 笔记-从原理到实现-基于R》](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F684885595)|@DefTruth|⭐️⭐️|\n| [[技术随笔][Git]📖如何优雅地git clone和git submodule？](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F639136221)|@DefTruth|⭐️|\n| [[技术随笔][3D]📖人脸重建3D参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524034741)|@DefTruth|⭐️|\n| [[技术随笔][3D]📖BlendShapes参考资料整理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F524036145)|@DefTruth|⭐️|\n| [[技术随笔][3D]📖从源码安装Pytorch3D详细记录及学习资料](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F512347464)|@DefTruth|⭐️|\n| [[技术随笔][ML]📖200页:《统计学习方法：李航》笔记 -从原理到实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F461520847)|@DefTruth|⭐️⭐️|\n\n### 📚 高性能计算与分布式——技术博客推荐 ([©️返回👆🏻](#contents))\n\n\u003Cdiv id=\"other-blogs\">\u003C\u002Fdiv>\n\n💡说明: 本小节整理一些自己比较喜欢的文章。欢迎大家提PR推荐更多优秀的文章！\n\n|📖 类型-标题|📖 作者| 📖 推荐 |\n|:---|:---|:---|\n| [[cute系列详解][入门]📖cutlass cute 101](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F660379052)|@朱小霖|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖CUTLASS 2.x & CUTLASS 3.x Intro 学习笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F710516489)|@BBuf|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：tiled copy](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1930389542784964333)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：tiled mma](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1937145378446226159)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：Layout Compose & Inverse](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1962625273636845008)|@竹熙佳处|⭐️⭐️⭐️|\n| 
[[cute系列详解][入门]📖写给大家看的 CuTe 教程: Layout Product & Divide](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1971945267294111573)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给大家看的 CuTe 教程：TMA Copy](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2003198909405763007)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][入门]📖写给进阶开发的 CuTe 笔记：permutationMNK 参数](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1973526710105419953)|@竹熙佳处|⭐️⭐️⭐️|\n| [[cute系列详解][Layout]📖cute 之 Layout](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F661182311)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Layout]📖cute Layout 的代数和几何解释](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F662089556)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Tensor]📖cute 之 Tensor](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F663093816)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][MMA]📖cute 之 MMA抽象](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F663092747)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Copy]📖cute 之 Copy抽象](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F666232173)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cute 之 Swizzle](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F671419093)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cute Swizzle细谈](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F684250988)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cutlass swizzle机制解析（一）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F710337546)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖cutlass swizzle机制解析（二）](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F711398930)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖CUDA避免smem bank conflict的swizzle机制解析](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F4746910252)|@frankshi|⭐️⭐️⭐️|\n| [[cute系列详解][Swizzle]📖布局代数实战：Swizzle自动推导](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1941306442683515068)|@melonedo|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 简单GEMM实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F667521327)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 GEMM流水线](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F665082713)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖cute 之 高效GEMM实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F675308830)|@reed|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM流水线: single\u002Fmulti-stage、pipeline](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712451053)|@Titus|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(一): ldmatrix的选择](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F702818267)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(二): TiledCopy与cp.async](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F703560147)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][GEMM]📖GEMM细节分析(三): Swizzle\u003CB,M,S>参数取值](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F713713957)|@Anonymous|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖Hopper Mixed GEMM的CUTLASS实现笔记](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F714378343)|@BBuf|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖CUTLASS CuTe实战(一): 基础](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F690703999)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖CUTLASS CuTe实战(二): 应用](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F692078624)|@进击的Killua|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖FlashAttention fp8实现（ada架构)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712314257)|@shengying.wei|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖FlashAttention 笔记: tiny-flash-attention解读](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F708867810)|@shengying.wei|⭐️⭐️⭐️|\n| [[cute系列详解][实践]📖使用cutlass cute复现flash attention](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F696323042)|@66RING|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖cutlass 基本认知](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F677616101)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖cutlass 
软件架构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F678915618)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖CUTLASS 基础介绍](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F671324125)|@进击的Killua|⭐️⭐️⭐️|\n| [[cutlass教程][入门]📖乱谈CUTLASS GTC2020 SLIDES](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F674693873)|@zzk again|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass block swizzle 和 tile iterator](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F679929705)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass bank conflict free的smem layout](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F681966685)|@JoeNomad|⭐️⭐️⭐️|\n| [[cutlass教程][深入]📖cutlass 多级流水线](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F687397095)|@JoeNomad|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-前言](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F686198447)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-寄存器](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F688616037)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-Load和Cache](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F692445145)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-浮点运算](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695667044)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-整数运算](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F700921948)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-比特和逻辑操作](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712356884)|@reed|⭐️⭐️⭐️|\n| [[GPU指令集架构][精解]📖NVidia GPU指令集架构-Warp级和Uniform操作](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F712357647)|@reed|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA 入门的正确姿势：how-to-optimize-gemm](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F478846788)|@白牛|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA（一）：CUDA 编程基础](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F645330027)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][入门]📖CUDA（二）：GPU的内存体系及其优化指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F654027980)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA（三）：通用矩阵乘法：从入门到熟练](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F657632577)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(1)：LayerNorm 算子的 CUDA 实现与优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F694974164)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(2)：SoftMax算子的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695307283)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(3)：Cross Entropy 的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695594396)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(4)：AdamW 优化器的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695611950)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(5)：激活函数与残差连接的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695703671)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(6)：embedding 层与 LM head 层的 CUDA 实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695785781)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(7)：self-attention 的 CUDA 实现及优化 (上)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F695898274)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖ops(8)：self-attention 的 CUDA 实现及优化 (下)](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F696197013)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA（四）：使用 CUDA 实现 Transformer 结构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F694416583)|@紫气东来|⭐️⭐️⭐️|\n| [[CUDA优化][Copy]📖Async Copy及Memory Barrier指令的功能与实现](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F685168850)|@Frank Wang|⭐️⭐️⭐️|\n| [[CUDA优化][GEMV]📖深入浅出GPU优化系列：gemv优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F494144694)|@有了琦琦的棍子|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA element-wise 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1888630735520391519)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA transpose 
算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1899760505733756129)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA reduce 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1905661893739283464)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[CUDA优化][实践]📖CUDA GEMM 算子详解](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F1910636263666610461)|@懒蚂蚁呀不嘿|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core初探](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620185229)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core-WMMA API编程入门](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620766588)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖Nvidia Tensor Core-MMA PTX编程入门](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F621855199)|@木子知|⭐️⭐️⭐️|\n| [[Tensor Cores]📖CUDA Ampere Tensor Core HGEMM 矩阵乘法优化](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F555339335)|@nicholaswilde|⭐️⭐️⭐️|\n| [[GPU通信架构][精解]📖NVIDIA GPGPU（四）- 通信架构](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F680262016)|@Bruce|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: 介绍](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9418379234)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: TorchDynamo](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9640728231)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: AOTAutograd](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F9997263922)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: TorchInductor](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F11224299472)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖Torch.compile流程解析: 算子融合](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F21053905491)|@StarCap|⭐️⭐️⭐️|\n| [[torch.compile][实践]📖Torch.compile使用指南](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F620163218)|@jhang|⭐️⭐️⭐️|\n| [[torch.compile][实践]📖Torch.compile详细示例解析教程](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F855291863)|@Bbuf|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖一文搞懂TorchDynamo原理](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F630933479)|@吾乃阿尔法|⭐️⭐️⭐️|\n| [[torch.compile][原理]📖理解torch.compile基本原理和使用方式](https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F12712224407)|@俯仰|⭐️⭐️⭐️|\n\n## ©️许可证 ([©️返回👆🏻](#contents))\n\n\u003Cdiv id=\"License\">\u003C\u002Fdiv>\n\nGNU 通用公共许可证 v3.0\n\n## 🎉贡献 ([©️返回👆🏻](#contents))\n\n\u003Cdiv id=\"contribute\">\u003C\u002Fdiv>\n\n如何贡献？请给本仓库点个星，或查看 [🌤🌤CONTRIBUTE🎉🎉](.\u002FCONTRIBUTE.md)。\n\n\u003Cdiv align='center'>\n\u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#xlite-dev\u002FLeetCUDA&Date\">\n \u003Cpicture>\n   \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png&theme=dark\" \u002F>\n   \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png\" \u002F>\n   \u003Cimg width=400 height=300 alt=\"Star History Chart\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_readme_f9b66eb04f5d.png\" \u002F>\n \u003C\u002Fpicture>\n\u003C\u002Fa>\n\u003C\u002Fdiv>\n\n## 📖 参考文献 ([©️返回👆🏻](#contents))\n\u003Cdiv id=\"ref\">\u003C\u002Fdiv>\n\n- [flash-attention-minimal](https:\u002F\u002Fgithub.com\u002Ftspeterkim\u002Fflash-attention-minimal)\n- [tiny-flash-attention](https:\u002F\u002Fgithub.com\u002F66RING\u002Ftiny-flash-attention)\n- [cute-gemm](https:\u002F\u002Fgithub.com\u002Freed-lau\u002Fcute-gemm)\n- [cutlass_flash_atten_fp8](https:\u002F\u002Fgithub.com\u002Fweishengying\u002Fcutlass_flash_atten_fp8)\n- 
[cuda_learning](https:\u002F\u002Fgithub.com\u002Fifromeast\u002Fcuda_learning)\n- [cuda_hgemm](https:\u002F\u002Fgithub.com\u002FBruce-Lee-LY\u002Fcuda_hgemm)\n- [cuda-tensorcore-hgemm](https:\u002F\u002Fgithub.com\u002Fnicolaswilde\u002Fcuda-tensorcore-hgemm)\n- [How_to_optimize_in_GPU](https:\u002F\u002Fgithub.com\u002FLiu-xiandong\u002FHow_to_optimize_in_GPU\u002Ftree\u002Fmaster\u002Fsgemv)\n- [how-to-optim-algorithm-in-cuda](https:\u002F\u002Fgithub.com\u002FBBuf\u002Fhow-to-optim-algorithm-in-cuda)\n- [cute_gemm](https:\u002F\u002Fgithub.com\u002Fweishengying\u002Fcute_gemm)\n- [cutlass](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fcutlass)","# LeetCUDA 快速上手指南\n\nLeetCUDA 是一套面向初学者的现代 CUDA 学习笔记，涵盖 Tensor Cores、多种精度（TF32\u002FF16\u002FBF16\u002FF8）、200+ CUDA Kernel 实现以及高性能 HGEMM 和 FlashAttention 优化方案。所有示例均基于 PyTorch 进行封装和测试。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04\u002F22.04) 或 Windows (WSL2)。\n*   **GPU**: 支持 CUDA 的 NVIDIA GPU (推荐 Ampere 架构及以上，如 RTX 30\u002F40 系列，A100, L20 等，以发挥 Tensor Cores 性能)。\n*   **CUDA Toolkit**: 版本 >= 11.8 (建议 12.x)。\n*   **Python**: 版本 >= 3.8。\n*   **PyTorch**: 需安装与 CUDA 版本匹配的 PyTorch。\n*   **编译器**: nvcc (随 CUDA Toolkit 提供)，以及 g++ 作为 nvcc 的主机编译器。\n\n**前置依赖安装：**\n\n```bash\n# 安装与 CUDA 12.1 对应的 PyTorch (如需加速，国内用户可换用镜像源，如清华源)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\npip install numpy pytest\n```\n\n## 安装步骤\n\n克隆仓库并配置本地开发环境：\n\n```bash\n# 1. 克隆项目\ngit clone https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA.git\ncd LeetCUDA\n\n# 2. (可选) 创建虚拟环境\npython -m venv venv\nsource venv\u002Fbin\u002Factivate  # Windows: venv\\Scripts\\activate\n\n# 3. 安装项目依赖 (如果根目录有 requirements.txt)\n# pip install -r requirements.txt \n# 若无 requirements.txt，确保已安装上述“环境准备”中的依赖即可直接运行 kernels 目录下的代码\n```\n\n## 基本使用\n\nLeetCUDA 的核心内容位于 `kernels` 目录，包含从入门到精通的 CUDA Kernel 实现。每个示例通常由三部分组成：\n1.  **CUDA 源码** (`.cu` 或 `.cuh`)：核心算法实现。\n2.  **PyTorch 绑定** (`.cpp` + `setup.py` 或直接在 Python 中通过 `torch.utils.cpp_extension` 加载)：用于在 Python 中调用。\n3.  **测试脚本** (`.py`)：验证正确性和性能。\n\n### 示例：运行一个简单的 CUDA Kernel\n\n假设我们要尝试 `kernels` 目录下的一个基础示例（具体文件名请以仓库实际结构为准，以下为通用流程）：\n\n1.  **查看示例代码**：\n    进入 `kernels\u002Feasy` 或 `kernels\u002Fhgemm` 目录，找到对应的 `.cu` 文件和测试脚本。\n\n2.  **编译与运行**：\n    大多数示例提供了直接的 Python 测试脚本，会自动处理编译过程。\n\n    ```bash\n    # 进入对应的 kernel 目录，例如 hgemm\n    cd kernels\u002Fhgemm\n    \n    # 运行测试脚本 (通常名为 test_*.py 或 bench_*.py)\n    python test_hgemm_mma.py\n    ```\n\n    如果示例需要手动编译扩展，通常遵循以下模式：\n\n    ```python\n    # 在 Python 测试文件中\n    from torch.utils.cpp_extension import load\n    \n    # 加载 CUDA 扩展\n    cuda_module = load(\n        name=\"my_kernel\",\n        sources=[\"my_kernel.cu\", \"binding.cpp\"],\n        extra_cuda_cflags=[\"-O3\", \"--use_fast_math\"],\n        verbose=True\n    )\n    \n    # 调用 kernel\n    output = cuda_module.my_function(input_tensor)\n    ```
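\n\n    与上面 `load()` 调用相对应的两个源文件，可参考如下最小示意（`my_kernel.cu`、`binding.cpp`、`my_function` 均为假设的演示名称，并非仓库中的真实文件，仅用于说明“CUDA 源码 + PyTorch 绑定”的组织方式）：\n\n    ```cuda\n    \u002F\u002F my_kernel.cu（假设的最小示例：逐元素加法 kernel + launcher）\n    #include \u003Ctorch\u002Fextension.h>\n\n    __global__ void add_kernel(const float* a, const float* b, float* c, int n) {\n      int idx = blockIdx.x * blockDim.x + threadIdx.x;\n      if (idx \u003C n) c[idx] = a[idx] + b[idx];  \u002F\u002F 边界检查，防止越界访问\n    }\n\n    \u002F\u002F launcher：分配输出张量并发射 kernel\n    torch::Tensor my_function(torch::Tensor a, torch::Tensor b) {\n      auto c = torch::empty_like(a);\n      const int n = a.numel();\n      const int threads = 256;\n      const int blocks = (n + threads - 1) \u002F threads;  \u002F\u002F 向上取整\n      add_kernel\u003C\u003C\u003Cblocks, threads>>>(\n          a.data_ptr\u003Cfloat>(), b.data_ptr\u003Cfloat>(), c.data_ptr\u003Cfloat>(), n);\n      return c;\n    }\n\n    \u002F\u002F ---- binding.cpp（将 my_function 暴露给 Python）----\n    #include \u003Ctorch\u002Fextension.h>\n\n    torch::Tensor my_function(torch::Tensor a, torch::Tensor b);  \u002F\u002F 声明 .cu 中的实现\n\n    PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n      m.def(\"my_function\", &my_function, \"elementwise add (CUDA)\");\n    }\n    ```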
\n\n3.  **性能基准测试**：\n    对于 HGEMM 或 FlashAttention 等高级示例，运行基准测试脚本对比 cuBLAS 或标准 FlashAttention 的性能：\n\n    ```bash\n    # 运行 HGEMM 基准测试\n    python bench_hgemm.py\n    \n    # 运行 FlashAttention MMA 基准测试\n    python bench_flash_attn_mma.py\n    ```\n\n> **提示**：项目中包含了从 `Easy` 到 `Hard++` 不同难度的 200+ Kernel 示例，建议按照 `README` 中的目录顺序循序渐进学习。对于涉及 Tensor Cores (WMMA\u002FMMA) 的高级示例，请确保你的 GPU 架构支持相应指令集。","某 AI 初创公司的算法工程师团队正致力于将自研的大语言模型推理速度提升 30%，以满足实时对话服务的低延迟要求，他们决定对核心的矩阵乘法与注意力机制算子进行深度定制优化。\n\n### 没有 LeetCUDA 时\n- **学习门槛极高**：团队成员需从零摸索复杂的 Tensor Core 编程与 PTX 指令集，缺乏系统性的现代 CUDA 学习笔记，导致上手周期长达数周。\n- **性能调优困难**：自行编写的 HGEMM（半精度通用矩阵乘法）内核效率低下，仅能达到 cuBLAS 库 60%-70% 的算力峰值，成为推理瓶颈。\n- **显存带宽受限**：在处理大维度（Large HeadDim）的注意力机制时，无法有效利用共享内存优化，导致显存访问延迟高，难以复现 FlashAttention-2 的高性能。\n- **试错成本高昂**：缺乏经过验证的 200+ 高质量 CUDA Kernel 参考代码，每次修改底层逻辑都需反复调试，严重拖慢迭代进度。\n\n### 使用 LeetCUDA 后\n- **快速掌握核心**：借助 LeetCUDA 提供的现代化教程与 PyTorch 绑定示例，团队在一周内便掌握了 TF32\u002FF16\u002FBF16 数据类型及 Tensor Core 的高效用法。\n- **逼近硬件极限**：直接复用并微调 LeetCUDA 中的 HGEMM 实现，利用 WMMA 与 CuTe API 成功将矩阵乘法性能提升至 cuBLAS 的 98%-100%。\n- **突破维度限制**：采用 LeetCUDA 中基于纯 MMA PTX 优化的 FlashAttention 变体，轻松支持 D=320~1024 的大维度注意力计算，显著降低 SRAM 复杂度。\n- **加速落地验证**：基于仓库内 200+ 个现成的高质量 Kernel 模板进行二次开发，大幅减少底层编码时间，让团队能专注于业务逻辑的创新。\n\nLeetCUDA 通过提供工业级性能的算子模板与现代化学习路径，帮助团队将原本需要数月完成的底层优化工作压缩至数周，成功实现了模型推理速度质的飞跃。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fxlite-dev_LeetCUDA_d369fb27.png","xlite-dev","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fxlite-dev_1feaae80.png","Develop ML\u002FAI toolkits and ML\u002FAI\u002FCUDA Learning resources.",null,"https:\u002F\u002Fgithub.com\u002Fxlite-dev",[79,83,87,91,95,99],{"name":80,"color":81,"percentage":82},"Cuda","#3A4E3A",88.3,{"name":84,"color":85,"percentage":86},"Python","#3572A5",9.4,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",2.1,{"name":92,"color":93,"percentage":94},"Makefile","#427819",0.2,{"name":96,"color":97,"percentage":98},"Shell","#89e051",0,{"name":100,"color":101,"percentage":98},"HTML","#e34c26",10301,1050,"2026-04-18T16:51:17","GPL-3.0",4,"未说明","必需 NVIDIA GPU（支持 Tensor Cores），文中测试涉及 L20, RTX 4090, RTX 3080 Laptop，需安装 CUDA 工具链以编译 PTX\u002FMMA 指令",{"notes":110,"python":107,"dependencies":111},"该项目主要是 CUDA 学习笔记和内核实现集合（包含 HGEMM, FlashAttention 等），核心代码为 C++\u002FCUDA。虽然提到结合 PyTorch 使用（通常指通过 Python 绑定调用），但 README 未明确指定具体的 Python 版本、PyTorch 版本或操作系统要求。运行需要能够编译自定义 CUDA 内核的环境。部分高级功能依赖纯 MMA PTX 指令，对显卡架构有一定要求（如 Ampere 及以上以支持 TF32\u002FBF16\u002FF8 等特性）。",[112,113,114,115],"PyTorch","CUDA Toolkit","Triton (部分内核)","CUTLASS (部分内核)",[14],[118,119,120,121,122,123,124,125,126,127,128,129],"cuda","hgemm","cuda-kernels","cuda-toolkit","flash-attention","cuda-demo","learn-cuda","leet-cuda","cuda-kernel","cuda-library","cuda-12","cuda-cpp","2026-03-27T02:49:30.150509","2026-04-19T09:15:05.546982",[133,138,143,148,153,158,163],{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},41967,"为什么在 elementwise 算子中不直接使用 pack_c 的结果，而是重新读取数据进行计算？","pack_c 中只包含部分结果。例如在 10 个数据中，前 8 个可能在 pack 操作中处理了，但后 2 个没有。这需要结合边界判断条件 `if((idx + 7) \u003C N)` 来看。虽然 pack_a\u002Fb 的读取可能越界（out_of_bound），但越界的部分会被后续的边界检查门控（gate）住，不会参与实际计算，因此需要重新读取以确保逻辑正确。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F381",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},41968,"为什么使用指针操作 float4 (reinterpret_cast\u003Cfloat4*>) 的性能比直接使用 float4 寄存器变量差很多？","使用指针方式（如 `float4* reg_a = reinterpret_cast\u003Cfloat4*>(a + idx)`）时，定义的变量只是存放地址的寄存器，实际操作时每次访问成员（x, y, z, w）都会直接去全局显存（global memory）读取数据，导致多次全局内存访问。而直接使用 float4 变量会将数据一次性从全局内存加载到寄存器中，后续运算均在寄存器上进行，只需一次全局内存访问，因此性能更高。
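\n一段假设的演示核函数可以直观对比这两种写法（核函数名与变量名均为示意，实际生成的访存指令还取决于编译器优化）：\n```cuda\n\u002F\u002F 对比 float4 的指针访问与值加载（假设的演示 kernel，仅用于说明访存行为差异）\n__global__ void float4_access_demo(const float* a, float* out, int n) {\n  int tid = blockIdx.x * blockDim.x + threadIdx.x;\n  int idx = tid * 4;  \u002F\u002F 每个线程处理 4 个 float\n  if ((idx + 3) \u003C n) {\n    \u002F\u002F 写法一（指针方式）：reg_a 只是存放地址的寄存器，\n    \u002F\u002F 每次访问 x\u002Fy\u002Fz\u002Fw 成员都可能各触发一次全局显存读取\n    const float4* reg_a = reinterpret_cast\u003Cconst float4*>(a + idx);\n    float slow = reg_a->x + reg_a->y + reg_a->z + reg_a->w;\n\n    \u002F\u002F 写法二（值方式）：一条 128-bit 向量化加载（LDG.128）先把数据读进寄存器，\n    \u002F\u002F 后续运算全部在寄存器上进行，只需一次全局访存\n    float4 reg_b = *reinterpret_cast\u003Cconst float4*>(a + idx);\n    float fast = reg_b.x + reg_b.y + reg_b.z + reg_b.w;\n\n    out[tid] = slow + fast;\n  }\n}\n```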
","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F341",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},41969,"如何配置环境以开始编写和测试第一个 CUDA Kernel（如 elementwise）？","首先需要配置好基础的 GPU 环境，包括安装 NVIDIA 驱动和 CUDA Toolkit。建议先确保 PyTorch GPU 版本能正常运行，可以通过以下代码验证：\n```python\nimport torch\nprint(torch.cuda.is_available()) # 应输出 True\n```\n如果 PyTorch 无法识别 GPU，请先参考相关教程配置好基础环境，然后再运行 LeetCUDA 项目，否则没有太大意义。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F362",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},41970,"在使用 WMMA 进行 SGEMM 计算时，为什么会出现 2-3 的数值误差？","这是因为 TF32 精度的 kernel 不能直接对 Float32 数据进行操作，需要先进行数据类型转换。必须在调用 kernel 前，先将 float32 数据转换为 tf32 格式。可以参考如下前置转换代码：\n```C++\n\u002F\u002F 将数据从 float32 转换为 tf32\nf32x4_tf32x4_kernel\u003C\u003C\u003C((Na + T * 4 - 1)\u002F(T * 4)), T>>>(\n    reinterpret_cast\u003Cfloat*>(a.data_ptr()),\n    reinterpret_cast\u003Cfloat*>(a.data_ptr()),\n    Na);\n```\n完成类型转换后再执行后续的矩阵乘法计算即可消除误差。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F277",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},41971,"当前的 softmax 实现是否支持跨 Block 的全局规约（Global Reduction）？","目前项目中的 `block_reduce_max_f32` 和 `block_reduce_sum_f32` 等函数仅实现了 Block 内部的规约，每个线程只能获取所属 Block 内的规约结果，无法获取全局规约结果。对于规约维度很大、需要分多个 Block 处理的场景，目前尚未实现完整的跨 Block 全局规约 Kernel。欢迎社区提交 PR 来补充这一功能。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F392",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},41972,"如何在 sigmoid 等向量化实现中正确处理边界条件（如 `idx+0` vs `idx+3`）？","在向量化实现中，通常采用两种策略处理边界：一是在 Kernel 外部对数据进行 Padding（填充），使数据长度刚好是向量大小的整数倍；二是在 Kernel 内部增加判断逻辑，例如以 float4 为例，当 `(idx + 3) \u003C N` 时执行向量化过程，否则执行非向量化（scalar）过程来处理剩余不足一个向量的数据。具体的优化更新可关注项目后续的代码迭代。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F316",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},41973,"如何使用 Nsight Compute (ncu) 正确生成性能分析文件？","在使用 `ncu` 命令生成分析文件时，需要注意参数格式。正确的命令格式应包含 `-f true` 参数以强制覆盖已存在的文件。例如：\n```bash\nncu -o relu.prof -f true relu.bin\nncu -o elementwise.prof -f true elementwise.bin\n```\n如果省略 `-f true` 或格式不正确，可能会导致命令执行失败或无法生成预期的分析文件。","https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fissues\u002F377",[169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249,254,259,264],{"id":170,"version":171,"summary_zh":172,"released_at":173},333972,"v3.0.19","## 变更内容\n* 修复（sgemm）：销毁 cuBLAS 句柄，以避免内存分配失败，由 @lnxtree 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F415 中完成\n\n## 新贡献者\n* @lnxtree 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F415 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.18...v3.0.19","2026-04-05T14:33:13",{"id":175,"version":176,"summary_zh":177,"released_at":178},333973,"v3.0.18","## 变更内容\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F410 中更新 README.md\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F412 中更新 README.md\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.17...v3.0.18","2026-03-20T01:29:22",{"id":180,"version":181,"summary_zh":182,"released_at":183},333974,"v3.0.17","## 变更内容\n* 更新由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F395 中提交的 cache-dit 发布新闻条目\n* 杂项：更新 .gitignore，防止 Git 跟踪用户的本地设置，由 @KarhouTam 在 
https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F400 中完成\n* 修复：移除冗余的 CP_ASYNC_WAIT_GROUP，由 @YangFan0918 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F401 中完成\n* 将设备获取方式改为使用 torch.device，因为 Triton 3.0…，由 @clveryang 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F402 中完成\n* 功能：添加 lmyybh 的知乎博客，由 @lmyybh 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F403 中完成\n* 修复：更新 mat_transpose_f32_row2col2d_kernel，由 @AgainstEntropy 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F404 中完成\n* 更新 README.md，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F406 中完成\n* 杂项：自动更新子模块，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F407 中完成\n* 日常维护：更新博客索引，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F408 中完成\n\n## 新贡献者\n* @KarhouTam 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F400 中完成了首次贡献\n* @YangFan0918 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F401 中完成了首次贡献\n* @clveryang 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F402 中完成了首次贡献\n* @lmyybh 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F403 中完成了首次贡献\n* @AgainstEntropy 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F404 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.16...v3.0.17","2026-02-13T07:35:38",{"id":185,"version":186,"summary_zh":187,"released_at":188},333975,"v3.0.16","## 变更内容\n* 杂项：@DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F388 中修复了格式问题\n* 功能：@DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F389 中添加了 FoCa 的知乎博客链接\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.15...v3.0.16","2025-10-17T03:04:57",{"id":190,"version":191,"summary_zh":192,"released_at":193},333976,"v3.0.15","## 变更内容\n* 杂项：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F369 中自动更新子模块\n* softmax.cu 拼写错误：由 @napleon-liu 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F371 中修复\n* 功能：taylorseer 的知乎博客：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F373 中添加\n* 缓存相关：Qwen-Image 1.5 倍性能提升的博客：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F374 中添加\n* 更新 README.md：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F375 中完成\n* 修复：mat_transpose.cu 中的括号错误：由 @wxwxwwxxx 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F376 中修复\n* 修复：为 elementwise_add_f16x8_pack_kernel 添加尾部情况处理：由 @lhycms 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F380 中完成\n* 功能：添加 Wan2.2-MoE 2.4 倍加速的知乎博客：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F383 中添加\n* 功能：Qwen-Image-Lightning 和 HunyuanImage-2.1 加速博客：由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F387 中添加\n\n## 新贡献者\n* @napleon-liu 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F371 中完成了首次贡献\n* @wxwxwwxxx 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F376 中完成了首次贡献\n* @lhycms 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F380 
中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.14...v3.0.15","2025-09-15T02:22:56",{"id":195,"version":196,"summary_zh":197,"released_at":198},333977,"v3.0.14","## 变更内容\n* 功能新增：@kitecats 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F366 中添加了一个适用于 sm8x 的朴素 ws-hgemm 实现。\n* 新增：@DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F368 中添加了可爱的 ws-hgemm 索引。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.13...v3.0.14","2025-08-01T06:47:01",{"id":200,"version":201,"summary_zh":202,"released_at":203},333978,"v3.0.13","## 变更内容\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F342 中置顶 HelloGitHub\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F343 中更新 README.md\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F344 中更新 README.md\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F345 中修复注释\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F346 中修复注释\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F347 中修复 all2all 示例\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F348 中修复 softmax.cu\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F349 中修复 CONTRIBUTE.md\n* 新特性：添加 torch.compile 相关博客，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F350 中实现\n* 新特性：Triton 层归一化，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F351 中实现\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F354 中修复 flash-attn 注释\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.12...v3.0.13","2025-06-29T02:59:21",{"id":205,"version":206,"summary_zh":207,"released_at":208},333979,"v3.0.12","## 变更内容\n* 功能新增：添加 DBCache 发布公告，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F338 中完成\n* 新增 out_f32x4_shared_bcf_merge_write_row2col(2d)，由 @Beatlesso 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F339 中完成\n\n## 新贡献者\n* @Beatlesso 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F339 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.11...v3.0.12","2025-06-17T09:56:48",{"id":210,"version":211,"summary_zh":212,"released_at":213},333980,"v3.0.11","## 变更内容\n* 功能新增：@kitecats 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F331 中添加了可爱的 hgemv 实现\n* 更新 README.md：@DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F333 中完成了更新\n* 功能新增：@kitecats 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F334 中添加了可爱的无缓存矩阵转置向量化实现\n* 错误修复：@hebangwen 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F335 中修复了 layernorm 和 rmsnorm 的 f16 溢出问题\n* 错误修复：@lixiaoquan 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F336 中修复了一个编译错误\n\n## 新贡献者\n* @hebangwen 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F335 中完成了首次贡献\n* @lixiaoquan 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F336 
中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.10...v3.0.11","2025-06-11T05:57:28",{"id":215,"version":216,"summary_zh":217,"released_at":218},333981,"v3.0.10","## 变更内容\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F322 中更新 README.md\n* 由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F323 中更新 README.md\n* 修复：缺少源文件，由 @botbw 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F325 中完成\n* 使用 128 位数据加载，由 @kitecats 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F326 中实现\n* 创建 FUNDING.yml 文件，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F327 中完成\n* 添加 open-collective 徽章，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F328 中完成\n* 更新 open-collective 贡献者徽章，由 @DefTruth 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F329 中完成\n\n## 新贡献者\n* @kitecats 在 https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F326 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.9...v3.0.10","2025-06-03T02:13:43",{"id":220,"version":221,"summary_zh":222,"released_at":223},333982,"v3.0.9","## What's Changed\r\n* feat: add some torch.distributed examples by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F313\r\n* feat: add some torch.distributed examples by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F315\r\n* feat: add a naive CuTe flash-attn by @botbw in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F314\r\n* fix(kernels): correct typo in LayerNorm kernel at line 73 110 346 443 by @nxdxml in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F317\r\n* misc: manually update submodules by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F318\r\n* chore: add naive cute flash-attn index by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F319\r\n* add triton merge_attn_states zhihu blog by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F320\r\n\r\n## New Contributors\r\n* @nxdxml made their first contribution in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F317\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.8...v3.0.9","2025-05-12T01:53:03",{"id":225,"version":226,"summary_zh":227,"released_at":228},333983,"v3.0.8","## What's Changed\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F308\r\n* misc: add triton vector add zhihu blog by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F310\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F311\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.7...v3.0.8","2025-05-06T06:23:01",{"id":230,"version":231,"summary_zh":232,"released_at":233},333984,"v3.0.7","## What's Changed\r\n* Update mat-transpose\u002FREADME.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F300\r\n* feat: add triton fused-softmax by @DefTruth in 
https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F301\r\n* misc: add pre-commit & format by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F302\r\n* misc: add developer guide by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F303\r\n* misc: add developer guide by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F304\r\n* misc: fix typo by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F305\r\n* Update CONTRIBUTE.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F306\r\n* feat: update pre-commit max-length=80 by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F307\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.6...v3.0.7","2025-04-28T06:02:16",{"id":235,"version":236,"summary_zh":237,"released_at":238},333985,"v3.0.6","## What's Changed\r\n* misc: update merge_attn_states unit tests by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F281\r\n* misc: update merge_attn_states docs by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F282\r\n* misc: update merge_attn_states docs by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F283\r\n* feat: remove merge_attn_states kernel help func by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F284\r\n* misc: remove static flag for to\u002Ffrom_float by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F285\r\n* misc: add new zhihu tech blog link by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F287\r\n* misc: add debug flag for ncu profile by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F288\r\n* bugfix: corrected theta calculation in RoPE CUDA kernel by @jiaau in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F290\r\n* docs: Add my ring-attention zhihu blog by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F291\r\n* Add simple CuTe mat-transpose implementations by @botbw in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F292\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F296\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F297\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F298\r\n* Rename to LeetCUDA by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F299\r\n\r\n## New Contributors\r\n* @jiaau made their first contribution in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F290\r\n* @botbw made their first contribution in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fpull\u002F292\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FLeetCUDA\u002Fcompare\u002Fv3.0.5...v3.0.6","2025-04-26T06:51:39",{"id":240,"version":241,"summary_zh":242,"released_at":243},333986,"v3.0.5","## What's Changed\r\n* [Misc] Automated submodule update by @DefTruth in 
https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F261\r\n* Update README.md by @tpoisonooo in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F264\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F265\r\n* bugfix: only export per token softmax kernels by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F266\r\n* misc: update vllm latest slides by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F267\r\n* feat: add triton vector_add kernel by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F268\r\n* feat: add triton merge_attn_states kernel by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F269\r\n* feat: add cuda merge_attn_states kernel by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F270\r\n* feat: update cuda merge_attn_states kernel by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F271\r\n* misc: dispatch CUDA merge_attn_states by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F273\r\n* misc: add triton kernel index by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F274\r\n* Fix mistake on mat trans 2d when init grid. by @bear-zd in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F275\r\n* misc: update cuda merge_attn_states kernel by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F276\r\n* kernel: optimize merge_attn_states CUDA kernel dispatch by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F278\r\n* feat: optimize merge_attn_states thread block dispatch by @DefTruth in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F279\r\n\r\n## New Contributors\r\n* @tpoisonooo made their first contribution in https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fpull\u002F264\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fxlite-dev\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv3.0.4...v3.0.5","2025-04-09T15:15:37",{"id":245,"version":246,"summary_zh":247,"released_at":248},333987,"v3.0.4","## What's Changed\r\n* [Docs] Add vLLM + DeepSeek-R1 671B deploy blog by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F259\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv3.0.3...v3.0.4","2025-03-15T03:14:03",{"id":250,"version":251,"summary_zh":252,"released_at":253},333988,"v3.0.3","## What's Changed\r\n* [Misc] Automated submodule update by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F257\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv3.0.2...v3.0.3","2025-03-04T04:14:08",{"id":255,"version":256,"summary_zh":257,"released_at":258},333989,"v3.0.2","## What's Changed\r\n* Fix typo in block_all_reduce.cu by @wplf in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F247\r\n* fix typo about enougth by @wplf in 
https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F248\r\n* [FFPA] Add FFPA tech zhihu blog by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F252\r\n* [FFPA] Update FFPA(Split-D) blog title by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F253\r\n* [Misc] Automated submodule update by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F254\r\n\r\n## New Contributors\r\n* @wplf made their first contribution in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F247\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv3.0.1...v3.0.2","2025-02-24T01:30:45",{"id":260,"version":261,"summary_zh":262,"released_at":263},333990,"v3.0.1","## What's Changed\r\n* [README] Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F240\r\n* [Bugfix] remove some error comments by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F241\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F242\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F243\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F244\r\n* [misc] Add hgemm-tensorcores-mma submodule✔️ by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F246\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv3.0.0...v3.0.1","2025-02-06T12:08:50",{"id":265,"version":266,"summary_zh":267,"released_at":268},333991,"v3.0.0","## What's Changed\r\n* [README] Add cuffpa-py library News🔥 by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F216\r\n* [README] Update cuffpa-py library News🔥 by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F217\r\n* [README] Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F218\r\n* [README] Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F219\r\n* [Misc] fix ffpa-attn-mma bench links typo by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F220\r\n* [Misc] fix ffpa-attn-mma bench links typo by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F221\r\n* [Misc] fix ffpa-attn-mma bench links typo by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F222\r\n* [README] Update README by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F223\r\n* [README] Add 🤖ffpa-attn-mma D=512 bench by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F224\r\n* [submodule] Add ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F225\r\n* [misc] Update ffpa-attn-mma & cutlass submodules by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F226\r\n* [FFPA] add ffpa-attn-mma kernels to lists by 
@DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F227\r\n* [Misc] Update ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F228\r\n* [README] Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F229\r\n* Update README.md by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F230\r\n* [Misc] update ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F231\r\n* [Misc] update ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F233\r\n* [Misc] update ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F234\r\n* [Misc] update ffpa-attn-mma submodule by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F236\r\n* [Release] Bump up to v3.0.0 (#237) by @DefTruth in https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fpull\u002F238\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FDefTruth\u002FCUDA-Learn-Notes\u002Fcompare\u002Fv2.6.15...v3.0.0","2025-01-22T10:08:22"]