[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-efeslab--Nanoflow":3,"tool-efeslab--Nanoflow":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
languages, including Simplified Chinese, dramatically lowering the barrier for learners worldwide. The project is developed in the open with an active community and continuously updated content, so learners get current, accurate material. If you are looking for a clear, friendly, professional entry into machine learning, ML-For-Beginners is an ideal starting point.

### ragflow — infiniflow/ragflow
77,062 stars · difficulty score 3 · last commit 2026-04-04 · tags: agent, image, development framework, language model, other

RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities: it not only extracts knowledge efficiently from all kinds of documents but also lets models reason and execute tasks on top of that knowledge.

Hallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structures (tables, charts, mixed layouts), RAGFlow markedly improves retrieval accuracy, reducing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes further, letting the system not just answer questions but autonomously plan the steps to solve complex problems.

The tool is well suited to developers, enterprise engineering teams, and AI researchers. Whether you want to stand up a private knowledge-base Q&A system quickly or are exploring vertical-domain LLM applications, you can benefit from it. RAGFlow provides a visual workflow editor and flexible APIs, lowering the barrier for non-algorithm users while satisfying professional developers' need for deep customization. Released under the Apache 2.0 license, it is becoming an important bridge between general-purpose LLMs and domain-specific knowledge.

## Overview

**Nanoflow** (efeslab/Nanoflow) — a throughput-oriented high-performance serving framework for LLMs.

NanoFlow is a high-throughput inference serving framework designed for large language models. Through intra-device parallelism it splits each request into tiny nano-batches so that compute, memory, and network operations overlap inside a single GPU, squeezing the hardware dry; asynchronous CPU scheduling then prepares the KV-cache and batch for the next iteration ahead of time, further shrinking idle periods. Experiments show NanoFlow improves throughput by up to 91% over TensorRT-LLM.

If you run LLM services online and need to cut cost or raise concurrency, NanoFlow is a handy backend option. It already supports mainstream models such as Llama2/3 and Qwen2 and ships a C++ backend with Python frontend examples for quick integration; researchers can use it to reproduce the paper's results and explore new parallelism strategies, and anyone interested in hardware tuning can draw on its nano-batching and asynchronous-pipeline design.

# Nanoflow
<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_5ce0ccb11bca.png" alt="NanoFlow logo" width="500">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2408.12757">Paper</a> | <a href="https://github.com/efeslab/Nanoflow">Slides</a>
</p>

NanoFlow is a throughput-oriented high-performance serving framework for LLMs. NanoFlow consistently delivers superior throughput compared to vLLM, Deepspeed-FastGen, and TensorRT-LLM. **NanoFlow achieves up to a 1.91x throughput boost compared to TensorRT-LLM.** The key features of NanoFlow include:

- **Intra-device parallelism**: Maximizes hardware utilization by exploiting nano-batching and execution unit scheduling to overlap different resource demands inside a single device.
- **Asynchronous CPU scheduling**: Achieves highly efficient CPU scheduling by adopting asynchronous control flow for GPU execution, CPU batch formation, and KV-cache management.

## News
- [2024/09] 🚀 Nanoflow now supports the Llama2 70B, Llama3 70B, Llama3.1 70B, Llama3 8B, Llama3.1 8B, and Qwen2 72B models. We also released experiment scripts to reproduce the evaluation results.

## Introduction

The key insight behind NanoFlow is that the traditional pipeline design of existing frameworks under-utilizes hardware resources because operations execute sequentially. NanoFlow therefore proposes intra-device parallelism (shown in the gif below), which uses nano-batches to schedule compute-, memory-, and network-bound operations for simultaneous execution. Such overlapping keeps compute-bound operations on the critical path and boosts resource utilization; the toy pipeline sketched after the figures below illustrates the effect.
<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_f66711d03946.png" alt="system design" width="90%">
</p>
<p align="center"><em>Overview of NanoFlow's key components</em></p>

<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_79144897981d.gif" alt="system design" width="90%">
</p>
<p align="center"><em>Illustration of intra-device parallelism</em></p>
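The overlap can be made concrete with a resource-partitioned pipeline. The following is a minimal, runnable Python sketch — an illustration, not NanoFlow code: three worker threads stand in for the compute, memory, and network partitions of one GPU, `time.sleep` stands in for kernel execution, and one request batch is split into four nano-batches.

```python
import queue
import threading
import time

NANO_BATCHES = 4   # one request batch split into 4 nano-batches
STAGE_TIME = 0.05  # pretend every operation takes 50 ms per nano-batch
T0 = time.perf_counter()

def worker(name, inbox, outbox=None):
    # Each worker models one partition of the device's execution units:
    # it runs one operation at a time, but concurrently with the other
    # partitions, so different resources stay busy simultaneously.
    while True:
        nb = inbox.get()
        if nb is None:                      # sentinel: shut the stage down
            if outbox is not None:
                outbox.put(None)
            return
        time.sleep(STAGE_TIME)              # "execute" the operation
        print(f"{time.perf_counter() - T0:5.2f}s  {name}[{nb}] done")
        if outbox is not None:
            outbox.put(nb)                  # hand the nano-batch downstream

q_compute, q_memory, q_network = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    threading.Thread(target=worker, args=("compute", q_compute, q_memory)),
    threading.Thread(target=worker, args=("memory", q_memory, q_network)),
    threading.Thread(target=worker, args=("network", q_network)),
]
for t in stages:
    t.start()
for nb in range(NANO_BATCHES):              # feed the nano-batches in
    q_compute.put(nb)
q_compute.put(None)
for t in stages:
    t.join()
# Sequential execution would take 3 stages x 4 nano-batches x 50 ms = 600 ms;
# the overlapped pipeline finishes in about (4 + 3 - 1) x 50 ms = 300 ms.
```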
With the GPU highly utilized, the CPU overhead — KV-cache management, batch formation, and selecting retired requests — takes a significant share ($>10$%) of inference time. NanoFlow therefore adopts the asynchronous control flow shown in the following figure. At any iteration $i$, NanoFlow makes batching decisions and allocates the KV-cache entries for the next iteration before the current iteration ends. It launches iteration $i+1$ directly, without detecting the end-of-sequence (EOS) tokens generated in iteration $i$, and retires completed requests at iteration $i+2$.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_574730102f4a.png" alt="system design" width="90%">
</p>
<p align="center"><em>Explanation of asynchronous control flow scheduling</em></p>
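To see the three-iteration overlap in code, here is a small runnable Python simulation — an illustrative sketch, not NanoFlow's scheduler. A single-thread "GPU" executor runs iteration $i$ while a CPU executor forms the batch for $i+1$; requests that emit EOS at iteration $i$ only leave the active set at iteration $i+2$.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

random.seed(0)
gpu_pool = ThreadPoolExecutor(max_workers=1)   # stands in for the GPU
cpu_pool = ThreadPoolExecutor(max_workers=1)   # the async CPU scheduler
active = {f"req{j}" for j in range(6)}
retire_queue = []                              # EOS at iter i -> retire at i+2

def form_batch(reqs):
    time.sleep(0.01)                           # batching + KV-slot allocation
    return sorted(reqs)

def gpu_iteration(batch):
    time.sleep(0.05)                           # one decode step for the batch
    return {r for r in batch if random.random() < 0.2}  # requests hitting EOS

next_batch = form_batch(active)                # batch for iteration 0
for i in range(5):
    batch = next_batch
    running = gpu_pool.submit(gpu_iteration, batch)
    # The CPU prepares iteration i+1 while the GPU runs iteration i,
    # without waiting to see which requests emit EOS in iteration i.
    prepared = cpu_pool.submit(form_batch, set(active))
    finished = running.result()
    retire_queue.append(finished)
    if len(retire_queue) > 2:                  # EOS found at iteration i-2 ...
        active -= retire_queue.pop(0)          # ... retires only now
    next_batch = prepared.result()
    print(f"iter {i}: ran {len(batch)} reqs, eos={sorted(finished)}, active={len(active)}")
```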
To avoid recomputation and to reuse the KV-cache across multi-round conversations, NanoFlow eagerly offloads the KV-cache of finished requests to SSDs. Within one iteration, NanoFlow selects the KV-cache of retired requests and copies it to the host layer by layer, in parallel with the in-flight inference operations. Our calculation shows that serving LLaMA2-70B needs only 5 GB/s of offloading bandwidth, while a single SSD can reach 3 GB/s.
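As a sanity check on those numbers, here is a back-of-envelope sketch. The model-configuration figures (80 layers, grouped-query attention with 8 KV heads of head dimension 128, fp16 entries) are the published LLaMA2-70B settings, not taken from this README.

```python
# Back-of-envelope check of the offloading-bandwidth claim above.
# Assumptions: LLaMA2-70B has 80 layers and grouped-query attention with
# 8 KV heads of head dim 128, with fp16 (2-byte) KV entries.
layers, kv_heads, head_dim, dtype_bytes = 80, 8, 128, 2

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # K and V
print(f"KV cache per token: {kv_bytes_per_token / 2**20:.2f} MiB")   # ~0.31 MiB

offload_bw = 5e9   # 5 GB/s, the requirement quoted above
ssd_bw = 3e9       # 3 GB/s per SSD, as quoted above
tokens_per_s = offload_bw / kv_bytes_per_token
print(f"5 GB/s sustains offloading ~{tokens_per_s:,.0f} tokens of KV per second")
print(f"SSDs needed: {offload_bw / ssd_bw:.1f} -> two commodity SSDs suffice")
```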
With all of these techniques implemented, we open-source NanoFlow: a C++-based backend and a Python-based demo frontend in roughly 4K lines. NanoFlow integrates state-of-the-art kernel libraries, including [CUTLASS](https://github.com/NVIDIA/cutlass) for GEMM, [FlashInfer](https://github.com/flashinfer-ai/flashinfer) for attention, and [MSCCL++](https://github.com/microsoft/mscclpp) for networking. The codebase also contains the scripts needed for environment setup and experiment reproduction.

## Benchmarks
We list some of the primary benchmarks; please see our paper for more details. We evaluate on A100 80GB SXM GPUs and choose [vLLM v0.5.3](https://github.com/vllm-project/vllm/pull/6696), [Deepspeed-FastGen v0.2.3](https://github.com/microsoft/DeepSpeed-MII/pull/433), and [TensorRT-LLM v0.8.0](https://github.com/NVIDIA/TensorRT-LLM/tree/v0.8.0) as baselines. Note that all frameworks turn off optimizations such as quantization, speculative decoding, and prefix caching.

### Offline throughput: Llama2-70B on 8xA100 (80GB)
We measure offline throughput in two settings: practical workloads from collected traces ([Splitwise](https://arxiv.org/abs/2311.18677), [LMSYS-Chat-1M](https://arxiv.org/abs/2309.11998), [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered)) and constant input/output lengths. NanoFlow consistently surpasses all the baselines.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_ecd1b4332f8d.png" alt="system design" width="90%">
</p>
<p align="center"><em>Offline throughput benchmarks</em></p>

### Online latency: Llama2-70B on 8xA100 (80GB)
We test normalized latency (the end-to-end request latency divided by the number of output tokens) on the three real-world traces at different request rates (incoming requests per second). NanoFlow sustains a higher request rate at low latency than the baselines across all datasets.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_14d126e520ec.png" alt="system design" width="90%">
</p>
<p align="center"><em>Online latency benchmarks</em></p>
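For reference, the normalized-latency metric can be computed from a request trace in a few lines; the trace format in this sketch (latency, output-token-count pairs) is made up for illustration.

```python
# "Normalized latency" as used above: end-to-end request latency divided by
# the number of output tokens, averaged over a trace. The trace format here
# (list of (latency_s, n_output_tokens) tuples) is a hypothetical example.
def normalized_latency(trace):
    per_request = [lat / max(n_out, 1) for lat, n_out in trace]
    return sum(per_request) / len(per_request)   # mean seconds per output token

trace = [(2.8, 140), (1.4, 70), (5.0, 512)]      # hypothetical requests
print(f"{normalized_latency(trace) * 1000:.1f} ms/token")
```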
### Feasibility: offline throughput on different models
We ported NanoFlow to five representative models to showcase its flexibility. We evaluate NanoFlow's offline throughput (tokens per second per GPU) on these LLMs with a constant input length of 1024 and output length of 512.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/efeslab_Nanoflow_readme_30ae8794038a.png" alt="system design" width="90%">
</p>
<p align="center"><em>Offline throughput of NanoFlow on different models</em></p>

# Codebase
## Abstract

The increasing usage of Large Language Models (LLMs) has resulted in a surging demand for planet-scale serving systems, where tens of thousands of GPUs continuously serve hundreds of millions of users. Consequently, throughput (under reasonable latency constraints) has emerged as a key metric that determines serving systems' performance. To boost throughput, various methods of inter-device parallelism (e.g., data, tensor, pipeline) have been explored. However, existing methods do not consider overlapping the utilization of different resources within a single device, leading to underutilization and sub-optimal performance.

We propose NanoFlow, a novel serving framework that exploits intra-device parallelism, which overlaps the usage of resources including compute, memory, and network within a single device through operation co-scheduling. To exploit intra-device parallelism, NanoFlow introduces two key innovations: first, NanoFlow proposes nano-batching to split requests at the granularity of operations, which breaks the dependency of sequential operations in LLM inference and enables overlapping; then, to benefit from overlapping, NanoFlow uses a device-level pipeline with execution unit scheduling, which partitions the device's functional units and simultaneously executes different operations in each unit. NanoFlow automates the pipeline setup using a parameter search algorithm, which makes it easy to port NanoFlow to different models. We implement NanoFlow on NVIDIA GPUs and evaluate end-to-end serving throughput on several popular models such as LLaMA-2-70B, Mixtral 8×7B, and LLaMA-3-8B. We show that NanoFlow achieves 68.5% of optimal throughput. With practical workloads, NanoFlow provides a 1.91× throughput boost compared to state-of-the-art serving systems, achieving 59% to 72% of optimal throughput across the ported models.

## Installation
### Docker setup

``` bash
mkdir -p ~/framework-test
docker run --gpus all --net=host --privileged -v /dev/shm:/dev/shm --name nanoflow -v ~/framework-test:/code -it nvcr.io/nvidia/cuda:12.8.1-cudnn-devel-ubuntu22.04

apt update
apt install -y pybind11-dev liburing-dev libopenmpi-dev
sysctl -w kernel.io_uring_disabled=0
sysctl -w vm.nr_hugepages=65536
```
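A quick way to confirm that the two sysctl settings above took effect — a convenience sketch, not from the project; the `/proc` paths are standard Linux, and the `io_uring_disabled` knob only exists on newer kernels.

```python
# Sanity-check the sysctl settings from the Docker setup step above.
from pathlib import Path

def read_int(path):
    # /proc files hold a single integer (possibly with trailing whitespace).
    return int(Path(path).read_text().split()[0])

hugepages = read_int("/proc/sys/vm/nr_hugepages")
print(f"vm.nr_hugepages = {hugepages} (want 65536)")

io_uring = Path("/proc/sys/kernel/io_uring_disabled")
if io_uring.exists():                   # the knob exists on kernels >= 6.6
    print(f"kernel.io_uring_disabled = {read_int(io_uring)} (want 0)")
else:
    print("io_uring_disabled knob absent: io_uring is enabled by default")
```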
### Gurobi License Setup (for Docker)

Follow these steps to obtain a Gurobi license and configure it so your Docker container can use it.

#### 1. Request a Gurobi License

1. Go to the Gurobi website and create an account (https://www.gurobi.com/).
2. After logging in, navigate to **My Gurobi → Get License**.
3. Choose the "WLS Academic" license type and fill out any required fields.
4. Gurobi will email you a license file named `gurobi.lic` (or provide a license key string).

#### 2. Place the License on Your Host Machine
``` bash
mkdir -p ~/gurobi/license
mv /path/to/downloaded/gurobi.lic ~/gurobi/license/
ls ~/gurobi/license
# Point GRB_LICENSE_FILE at the license file itself
echo "export GRB_LICENSE_FILE=$HOME/gurobi/license/gurobi.lic" >> ~/.bashrc
```
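Before moving on, it can save a failed build to verify the license is actually visible; a convenience sketch, not part of the project's scripts.

```python
# Check that the Gurobi license configured above is reachable:
# GRB_LICENSE_FILE must point at an existing gurobi.lic file.
import os

lic = os.environ.get("GRB_LICENSE_FILE", "")
if lic and os.path.isfile(lic):
    print(f"Gurobi license found at {lic}")
else:
    raise SystemExit("GRB_LICENSE_FILE unset or file missing; redo the step above")
```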
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fefeslab_Nanoflow_readme_ecd1b4332f8d.png\" alt=\"系统设计\" width=\"90%\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\u003Cem>离线吞吐量基准测试\u003C\u002Fem>\u003C\u002Fp>\n\n### 在线延迟：Llama2-70B 在 8×A100（80GB）上\n我们针对三种真实世界轨迹，测试了归一化延迟（即端到端请求延迟除以输出 token 数），并设置了不同的请求速率（每秒到达的请求数）。在所有数据集中，NanoFlow 均能以更低的延迟维持更高的请求速率，优于其他基准。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fefeslab_Nanoflow_readme_14d126e520ec.png\" alt=\"系统设计\" width=\"90%\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\u003Cem>在线延迟基准测试\u003C\u002Fem>\u003C\u002Fp>\n\n### 可行性：不同模型上的离线吞吐量\n我们将 NanoFlow 移植到了 5 个具有代表性的模型上，以展示其灵活性。我们在输入长度固定为 1024、输出长度固定为 512 的情况下，评估了 NanoFlow 在这些大语言模型上的离线吞吐量（每 GPU 每秒的 token 数）。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fefeslab_Nanoflow_readme_30ae8794038a.png\" alt=\"系统设计\" width=\"90%\">\n\u003C\u002Fp>\n\u003Cp align=\"center\">\u003Cem>NanoFlow 在不同模型上的离线吞吐量\u003C\u002Fem>\u003C\u002Fp>\n\n# 代码库\n\n## 摘要\n\n大型语言模型（LLMs）的广泛应用导致了对行星级规模服务系统的需求激增，这类系统需要数以万计的GPU持续为数亿用户提供服务。因此，在合理的延迟约束下，吞吐量已成为衡量服务系统性能的关键指标。为了提升吞吐量，人们已探索了多种设备间并行化方法，如数据并行、张量并行和流水线并行等。然而，现有方法并未考虑在同一设备内部不同资源的使用重叠问题，从而导致资源利用率低下、性能无法达到最优。\n\n我们提出了NanoFlow这一新型服务框架，通过操作协同调度的方式，充分挖掘设备内部的并行性，实现计算、内存和网络等资源的重叠利用。为实现设备内并行，NanoFlow引入了两项关键创新：首先，NanoFlow提出纳米批处理技术，将请求按操作粒度进行拆分，打破LLM推理中顺序操作的依赖关系，从而实现操作间的重叠执行；其次，为了充分利用这种重叠效应，NanoFlow采用了一种基于执行单元调度的设备级流水线机制，该机制将设备的功能单元进行划分，并在每个单元中同时执行不同的操作。NanoFlow还借助参数搜索算法自动完成流水线的配置，使得NanoFlow能够轻松适配不同模型。我们在NVIDIA GPU上实现了NanoFlow，并在LLaMA-2-70B、Mixtral 8×7B、LLaMA-3-8B等多款主流模型上评估了端到端的服务吞吐量。实验结果表明，NanoFlow可达到最优吞吐量的68.5%。在实际工作负载下，与当前最先进的服务系统相比，NanoFlow的吞吐量提升了1.91倍，而后者在移植后的模型上仅能达到最优吞吐量的59%至72%。\n\n## 安装\n### Docker环境搭建\n\n```bash\nmkdir -p ~\u002Fframework-test\ndocker run --gpus all --net=host --privileged -v \u002Fdev\u002Fshm:\u002Fdev\u002Fshm --name nanoflow -v ~\u002Fframework-test:\u002Fcode -it nvcr.io\u002Fnvidia\u002Fcuda:12.8.1-cudnn-devel-ubuntu22.04\n\napt update\napt install pybind11-dev\napt install liburing-dev\napt install libopenmpi-dev\nsysctl -w kernel.io_uring_disabled=0\nsysctl -w vm.nr_hugepages=65536\n```\n\n### Gurobi许可证配置（适用于Docker）\n\n请按照以下步骤获取Gurobi许可证并进行配置，以便您的Docker容器能够使用该许可证。\n\n#### 1. 申请Gurobi许可证\n1. 访问Gurobi官网并注册账号（https:\u002F\u002Fwww.gurobi.com\u002F）。\n2. 登录后，进入**My Gurobi → Get License**页面。\n3. 选择“WLS Academic”许可证类型，并填写相关必要信息。\n4. Gurobi会向您发送一封包含许可证文件`gurobi.lic`的电子邮件（或提供许可证密钥字符串）。\n\n#### 2. 
# NanoFlow Quickstart Guide

## Prerequisites
- **OS**: Ubuntu 22.04 (recommended)
- **GPU**: NVIDIA A100 80 GB (the officially tested environment)
- **CUDA**: 12.8+
- **Docker**: with nvidia-docker installed
- **Network**: access to Hugging Face and the Gurobi website

## Installation Steps

### 1. Start the container
```bash
mkdir -p ~/framework-test
docker run --gpus all --net=host --privileged -v /dev/shm:/dev/shm --name nanoflow -v ~/framework-test:/code -it nvcr.io/nvidia/cuda:12.8.1-cudnn-devel-ubuntu22.04
```

### 2. Install system dependencies
```bash
apt update
apt install -y pybind11-dev liburing-dev libopenmpi-dev
sysctl -w kernel.io_uring_disabled=0
sysctl -w vm.nr_hugepages=65536
```

### 3. Configure the Gurobi license (academic users)
1. Register an account at https://www.gurobi.com
2. Under "My Gurobi → Get License", choose **WLS Academic** and download `gurobi.lic`
3. Mount the license into the container:
```bash
mkdir -p ~/gurobi/license
# put the downloaded gurobi.lic into ~/gurobi/license/
echo "export GRB_LICENSE_FILE=/gurobi/license/gurobi.lic" >> ~/.bashrc
```

### 4. Clone and install
```bash
git clone https://github.com/efeslab/Nanoflow.git
cd Nanoflow
chmod +x ./installAnaconda.sh && ./installAnaconda.sh
source ~/.bashrc

# log in to Hugging Face (a mirror endpoint can be configured if needed)
mkdir -p hf
echo "export HF_HOME=$(pwd)/hf" >> ~/.bashrc
source ~/.bashrc
huggingface-cli login

# one-shot environment setup
cd Nanoflow-python
yes | bash setup.sh
```

### 5. Build
```bash
cd pybind
mkdir -p build && cd build
cmake .. && make -j$(nproc)
```
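The FAQ at the bottom of this page recommends lowering the `make -j` parallelism on machines with roughly 50 GB of RAM or less. A small helper sketch for picking a job count; the ~2 GB-per-compile-job estimate is an assumption, not a figure from the project.

```python
# Pick a conservative `make -j` value from available RAM (convenience sketch;
# the ~2 GB-per-compile-job estimate below is an assumption).
import os

def parallel_jobs(gb_per_job: float = 2.0) -> int:
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    avail_gib = int(meminfo["MemAvailable"].split()[0]) / 2**20  # kB -> GiB
    return max(1, min(os.cpu_count() or 1, int(avail_gib // gb_per_job)))

print(f"make -j{parallel_jobs()}")
```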
&& make -j$(nproc)\n```\n\n## 基本使用\n```bash\ncd entry\npython run_llama3.py -load_hf_weight=True\n```\n首次运行会自动下载 Llama-3-8B 权重并启动本地推理服务，默认端口 8000。","一家做 AI 客服 SaaS 的初创公司，需要在 4 张 A100 上同时跑 Llama3-70B 为 2000 家电商店铺提供 7×24 小时智能客服。\n\n### 没有 Nanoflow 时\n- 峰值并发 800 条对话时，GPU 利用率只有 55%，剩余算力被 KV-cache 管理、batch 排队浪费掉，平均每条回复 2.8 秒，用户开始抱怨“机器人反应慢”。  \n- 为了撑住晚高峰，运维同学只能手动把实例数从 4 扩到 8，云账单瞬间翻倍，CEO 在群里发“成本控制”的表情包。  \n- 多轮对话的 KV-cache 无法复用，用户第二次提问就要重新算一遍，导致 30% 的 token 白白烧掉，月度预算提前 10 天见底。  \n- 由于 CPU 调度是同步的，GPU 每完成一次推理都要等 CPU 选 batch，结果 GPU 空转 12% 的时间，工程师自嘲“给 NVIDIA 打白工”。\n\n### 使用 Nanoflow 后\n- 同样的 4 张 A100，借助 intra-device parallelism 把 nano-batch 与计算\u002F内存\u002F网络操作重叠，GPU 利用率飙到 92%，平均回复时间降到 1.4 秒，用户满意度评分从 3.8 升到 4.6。  \n- 晚高峰无需扩容，单实例吞吐提升 1.8 倍，云账单直接腰斩，财务把省下的 2 万美元划给市场做投放。  \n- 对话结束后 KV-cache 立即分层异步落盘，用户回来继续聊时 95% 的 cache 秒级召回，token 消耗下降 28%，预算撑到月底还有余粮。  \n- 异步 CPU 调度让 GPU 零等待，CPU 提前为下一轮推理准备好 batch 和 cache，整体延迟再降 15%，工程师终于能在周末安心陪家人。\n\nNanoflow 用一张卡的算力打出两张卡的效果，让初创公司把省下的钱花在增长而不是机房里。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fefeslab_Nanoflow_ecd1b433.png","efeslab","Efeslab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fefeslab_5bf4b602.png","Efeslab at the University of Washington",null,"https:\u002F\u002Fhomes.cs.washington.edu\u002F~baris\u002F","https:\u002F\u002Fgithub.com\u002Fefeslab",[83,87,91,95,99,103,107,111],{"name":84,"color":85,"percentage":86},"Jupyter Notebook","#DA5B0B",47.7,{"name":88,"color":89,"percentage":90},"Python","#3572A5",41,{"name":92,"color":93,"percentage":94},"Cuda","#3A4E3A",7.1,{"name":96,"color":97,"percentage":98},"C++","#f34b7d",2.9,{"name":100,"color":101,"percentage":102},"CMake","#DA3434",0.8,{"name":104,"color":105,"percentage":106},"Shell","#89e051",0.3,{"name":108,"color":109,"percentage":110},"C","#555555",0.1,{"name":112,"color":113,"percentage":114},"Makefile","#427819",0,952,48,"2026-04-02T04:29:06",4,"Linux","必需 NVIDIA GPU，官方测试使用 A100 80GB SXM，CUDA 12.8.1（nvcr.io\u002Fnvidia\u002Fcuda:12.8.1-cudnn-devel-ubuntu22.04 镜像）","未说明",{"notes":123,"python":124,"dependencies":125},"需 Docker 环境并启用 --gpus all；需 Gurobi 学术许可证；需 Hugging Face 登录；需配置 65536 个 hugepages；首次运行需下载 Hugging Face 模型权重","未说明（通过 installAnaconda.sh 自动安装 Anaconda 环境）",[126,127,128,129,130,131,132],"CUTLASS","FlashInfer","MSCCL++","pybind11","liburing","libopenmpi","Gurobi",[26,13],[135,136,137,138,139,140],"cuda","inference","llama2","llm","llm-serving","model-serving","2026-03-27T02:49:30.150509","2026-04-06T09:46:11.989183",[144,149,154,159,164,169,174,179,183],{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},6030,"NanoFlow 如何为不同 kernel 分配 SM？","对于 CUTLASS GEMM，通过 `cta_m\u002Fcta_n\u002Fcta_k` 三个参数决定 grid size，从而间接控制每个 kernel 占用的 SM 数量；具体代码见：\n- 生成逻辑：https:\u002F\u002Fgithub.com\u002Fefeslab\u002FNanoflow\u002Fblob\u002F22f0b48739d3a9ad1d8c82f956906b3bc58d519b\u002Fpipeline\u002Fsrc\u002Fgenerate-gemm\u002FgenGEMM.py\n- 调用处：https:\u002F\u002Fgithub.com\u002Fefeslab\u002FNanoflow\u002Fblob\u002F22f0b48739d3a9ad1d8c82f956906b3bc58d519b\u002Fpipeline\u002Finclude\u002FcutlassGemmWrapperImpl.cuh#L92\n\n对于 Attention kernel，grid size 在 prefill.cuh 中固定为：\nhttps:\u002F\u002Fgithub.com\u002Fefeslab\u002FNanoflow\u002Fblob\u002F22f0b48739d3a9ad1d8c82f956906b3bc58d519b\u002Fgemv\u002Finclude\u002Fattention\u002Fprefill.cuh#L1842","https:\u002F\u002Fgithub.com\u002Fefeslab\u002FNanoflow\u002Fissues\u002F13",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},6031,"为什么 serve.py 输出全是乱码\u002F重复词？","当前版本仅在 8×A100 上验证通过。当 GPU 数量不足 8 张时，NanoFlow 会把缺失 GPU 
# Use Case

An AI customer-service SaaS startup needs to run Llama3-70B on four A100s, serving 24/7 smart customer support for 2,000 e-commerce shops.

### Without Nanoflow
- At a peak of 800 concurrent conversations, GPU utilization sits at 55%; the remaining capacity is wasted on KV-cache management and batch queuing. Replies average 2.8 seconds and users start complaining that "the bot is slow".
- To survive the evening peak, ops manually scales from 4 to 8 instances, instantly doubling the cloud bill; the CEO posts "cost control" memes in the group chat.
- The KV-cache from multi-round conversations cannot be reused, so a user's second question is recomputed from scratch; 30% of tokens burn for nothing and the monthly budget runs out 10 days early.
- Because CPU scheduling is synchronous, the GPU waits for the CPU to form each batch after every inference step, idling 12% of the time; the engineers joke about "working for NVIDIA for free".

### With Nanoflow
- On the same four A100s, intra-device parallelism overlaps nano-batches across compute/memory/network operations; GPU utilization jumps to 92%, average reply time drops to 1.4 seconds, and the user-satisfaction score rises from 3.8 to 4.6.
- No scaling is needed at peak: single-instance throughput rises 1.8×, the cloud bill is cut in half, and finance hands the saved $20,000 to marketing.
- When a conversation ends, its KV-cache is asynchronously offloaded layer by layer; when the user returns, 95% of the cache is recalled within seconds, token consumption drops 28%, and the budget lasts the month.
- Asynchronous CPU scheduling removes GPU waits: the CPU prepares the next iteration's batch and cache ahead of time, cutting overall latency another 15%, and the engineers finally get their weekends back.

Nanoflow squeezes two cards' worth of performance out of one card's compute, letting the startup spend its savings on growth instead of the machine room.

# Project Metadata

- **Owner**: Efeslab (efeslab) — "Efeslab at the University of Washington" — https://homes.cs.washington.edu/~baris/ · https://github.com/efeslab
- **Stars / forks**: 952 / 48
- **Last commit**: 2026-04-02
- **License**: not specified
- **Difficulty score**: 4
- **Languages**: Jupyter Notebook 47.7%, Python 41%, Cuda 7.1%, C++ 2.9%, CMake 0.8%, Shell 0.3%, C 0.1%, Makefile <0.1%
- **OS**: Linux
- **GPU**: NVIDIA GPU required; officially tested on an A100 80GB SXM with CUDA 12.8.1 (the nvcr.io/nvidia/cuda:12.8.1-cudnn-devel-ubuntu22.04 image)
- **RAM**: not specified
- **Python**: not specified (an Anaconda environment is installed automatically by installAnaconda.sh)
- **Key dependencies**: CUTLASS, FlashInfer, MSCCL++, pybind11, liburing, libopenmpi, Gurobi
- **Environment notes**: requires Docker with `--gpus all`, a Gurobi academic license, a Hugging Face login, and 65,536 hugepages; the first run downloads model weights from Hugging Face
- **Category tags**: language model, development framework
- **GitHub topics**: cuda, inference, llama2, llm, llm-serving, model-serving

# FAQ

**How does NanoFlow assign SMs to different kernels?**
For CUTLASS GEMMs, the grid size is determined by the `cta_m`/`cta_n`/`cta_k` parameters, which indirectly controls how many SMs each kernel occupies; see:
- generation logic: https://github.com/efeslab/Nanoflow/blob/22f0b48739d3a9ad1d8c82f956906b3bc58d519b/pipeline/src/generate-gemm/genGEMM.py
- call site: https://github.com/efeslab/Nanoflow/blob/22f0b48739d3a9ad1d8c82f956906b3bc58d519b/pipeline/include/cutlassGemmWrapperImpl.cuh#L92

For the attention kernel, the grid size is fixed in prefill.cuh:
https://github.com/efeslab/Nanoflow/blob/22f0b48739d3a9ad1d8c82f956906b3bc58d519b/gemv/include/attention/prefill.cuh#L1842
(Source: https://github.com/efeslab/Nanoflow/issues/13)

**Why is the output of serve.py all garbage or repeated words?**
The current version has only been validated on 8×A100. With fewer than 8 GPUs, NanoFlow treats the missing GPUs' results as empty, which corrupts the output. Consumer cards such as the 4090/3090 have no NVLink and communicate inefficiently; supporting fewer GPUs would require redesigning the pipeline at the cost of throughput.
(Source: https://github.com/efeslab/Nanoflow/issues/5)

**What does UGD in Figure 4 of the paper mean?**
UGD is short for Up Gate Down, the three MLP projections in LLaMA models (up-proj, gate-proj, down-proj).
(Source: https://github.com/efeslab/Nanoflow/issues/1)

**What about the runtime error "peer access is not supported between these two device"?**
The error means the GPUs do not support P2P (peer access). Only data-center/workstation cards (V100, A100, H100, Ada 6000, etc.) support P2P; consumer cards such as the RTX 3090/4090 do not and cannot run the code paths that require it. A quick capability check appears at the end of this page.
(Source: https://github.com/efeslab/Nanoflow/issues/22)

**What if the build runs out of CPU memory?**
On machines with 50 GB of RAM or less, reduce the number of parallel compile jobs:
1. Manually lower the `make -j` parallelism;
2. or use the community-provided automated build script (see PR #6): https://github.com/AlongWY/Nanoflow/actions/runs/10617958844.
(Source: https://github.com/efeslab/Nanoflow/issues/12)

**How can the benchmarks in the paper be reproduced?**
The authors said they would release the benchmark-reproduction scripts within the following week; ShareGPT or other custom datasets can then be used directly.
(Source: https://github.com/efeslab/Nanoflow/issues/4)

**How is Compute Utilization in the paper calculated?**
Compute Utilization is defined as achieved throughput divided by theoretical peak throughput. Concretely:
- achieved throughput: measure kernel runtimes with a profiler, then divide the completed FLOPs by the time;
- theoretical peak: computed from the GPU's specs (SM count, clock frequency, Tensor Core peak FLOPS).
(Source: https://github.com/efeslab/Nanoflow/issues/35)

**How do I set up NUMA binding to get the 1.27× bandwidth improvement?**
The key is to pin threads to the NUMA node directly attached to the GPU, so KV-cache CPU↔GPU copies take the local NUMA path. Reference:
- code: `/src/computeBound.cu#L100`
- inspect the NUMA topology on the host with `numactl --hardware`, then bind threads with `numactl --cpunodebind=X --membind=X` or `taskset`.
(Source: https://github.com/efeslab/Nanoflow/issues/5)

**GEMM and communication kernels degrade each other when run concurrently — how to mitigate?**
Even when grid sizes are below the SM count, GEMM and NCCL P2P kernels still compete for memory bandwidth, slowing both down. NanoFlow has no perfect solution yet; in principle a mutex-like mechanism is needed to synchronize between kernels, but no efficient implementation exists. Options to consider:
- use mscclpp instead of NCCL;
- manually separate the compute and communication phases so they do not run concurrently.
(Source: https://github.com/efeslab/Nanoflow/issues/14)
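As referenced in the peer-access FAQ above, here is a quick way to check whether your GPUs support P2P — a convenience sketch that assumes PyTorch with CUDA is installed; `torch.cuda.can_device_access_peer` is the relevant query.

```python
# Check pairwise GPU peer-access (P2P) support, relevant to the
# "peer access is not supported" FAQ above. Assumes PyTorch with CUDA.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU{i} -> GPU{j}: {'P2P ok' if ok else 'no P2P'}")
```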