[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-byungsoo-oh--ml-systems-papers":3,"tool-byungsoo-oh--ml-systems-papers":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,2,"2026-04-18T23:33:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":79,"stars":82,"forks":83,"last_commit_at":84,"license":79,"difficulty_score":85,"env_os":86,"env_gpu":87,"env_ram":87,"env_deps":88,"category_tags":91,"github_topics":92,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":96,"updated_at":97,"faqs":98,"releases":99},9481,"byungsoo-oh\u002Fml-systems-papers","ml-systems-papers","Curated collection of papers in machine learning systems","ml-systems-papers 是一个精心整理的机器学习系统领域学术论文合集，旨在为从业者和研究者提供一站式的前沿技术文献导航。随着大模型和分布式训练的快速发展，如何高效处理数据、优化 GPU 资源调度、加速推理以及降低通信开销成为行业痛点，而相关研究往往分散在各处难以追踪。这份清单系统地解决了信息碎片化问题，将海量论文按数据处理、训练系统、推理优化、显存管理、编译器技术及联邦学习等二十多个关键主题进行分类梳理，甚至特别标注了综述文章，帮助用户快速把握领域全貌。\n\n该资源特别适合 AI 系统工程师、算法研究人员以及对底层架构感兴趣的高校师生使用。无论是需要寻找特定场景（如 LLM 长上下文优化、MoE 架构或 RAG 系统）的解决方案，还是希望深入了解数据流水线瓶颈与容错机制，都能在此找到高质量的参考依据。其独特亮点在于更新及时且分类细致，不仅涵盖了传统的分布式训练与资源调度，还紧跟趋势收录了智能体系统、混合大模型及 RL 后训练等新兴方向的最新成果，是构建高效、稳定机器学习基础设施不可或缺的案头指南。","# Paper List for Machine Learning Systems\n\n![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)\n[![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fbyungsoo-oh\u002Fml-systems-papers\u002Fpulls)\n\nPaper list for broad topics in machine learning systems \n> NOTE: Survey papers are annotated with [Survey 🔍] prefix.\n\n## Table of Contents\n\u003C!-- TOC -->\n\n- [Paper List for Machine Learning Systems](#paper-list-for-machine-learning-systems)\n  - [Table of Contents](#table-of-contents)\n  - [Data Processing](#data-processing)\n    - [Data pipeline optimization](#data-pipeline-optimization)\n    - [Caching and distributed storage for ML training](#caching-and-distributed-storage-for-ml-training)\n    - [LLM data plane](#llm-data-plane)\n    - [Others](#others)\n  - [Training System](#training-system)\n    - [ML job analysis on GPU clusters](#ml-job-analysis-on-gpu-clusters)\n    - [Resource scheduling](#resource-scheduling)\n    - [Distributed training](#distributed-training)\n    - [AutoML](#automl)\n    - [GNN training system](#gnn-training-system)\n  - [Inference System](#inference-system)\n  - [Attention Optimization](#attention-optimization)\n  - [Mixture of Experts 
(MoE)](#mixture-of-experts-moe)\n  - [Communication Optimization \\& Network Infrastructure for Distributed ML](#communication-optimization--network-infrastructure-for-distributed-ml)\n  - [Fault tolerance \\& Straggler mitigation](#fault-tolerance--straggler-mitigation)\n  - [GPU Memory Management \\& Optimization](#gpu-memory-management--optimization)\n  - [GPU Sharing](#gpu-sharing)\n  - [Compiler](#compiler)\n  - [GPU Kernel Optimization](#gpu-kernel-optimization)\n  - [LLM Long Context](#llm-long-context)\n  - [Model Compression](#model-compression)\n  - [Federated Learning](#federated-learning)\n  - [Privacy-Preserving ML](#privacy-preserving-ml)\n  - [ML APIs \\& Application-Side Optimization](#ml-apis--application-side-optimization)\n  - [ML for Systems](#ml-for-systems)\n  - [Energy Efficiency](#energy-efficiency)\n  - [Retrieval-Augmented Generation (RAG)](#retrieval-augmented-generation-rag)\n  - [Simulation](#simulation)\n  - [Systems for Agentic AI](#systems-for-agentic-ai)\n  - [RL Post-Training](#rl-post-training)\n  - [Multimodal](#multimodal)\n  - [Hybrid LLMs](#hybrid-llms)\n  - [Others](#others-1)\n- [References](#references)\n\n\u003C!-- \u002FTOC -->\n\n## Data Processing\n\n### Data pipeline optimization\n**General**\n- [arxiv'25] [Scalable and Performant Data Loading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20067)\n- [arxiv'25] [OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09844)\n- [arxiv'25] [The Streaming Batch Model for Efficient and Fault-Tolerant Heterogeneous Execution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12407)\n- [arxiv'25] [In-Network Preprocessing of Recommender Systems on Multi-Tenant SmartNICs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12032)\n- [VLDB'25] [cedar: Composable and Optimized Machine Learning Input Data Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08895)\n- [HotInfra'24] [Lotus: Characterize Architecture Level CPU-based Preprocessing in Machine Learning Pipelines](https:\u002F\u002Fkexinrong.github.io\u002Flab\u002Ffiles\u002Flotus-hotinfra24.pdf)\n- [arxiv'24] [TensorSocket: Shared Data Loading for Deep Learning Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18749)\n- [arxiv'24] [Efficient Tabular Data Preprocessing of ML Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14912)\n- [MLSys'22] Plumber: Diagnosing and Removing Performance Bottlenecks in Machine Learning Data Pipelines\n- [ISCA'22] Understanding Data Storage and Ingestion for Large-Scale Deep Recommendation Model Training\n- [SIGMOD'22] Where Is My Training Bottleneck? 
Hidden Trade-Offs in Deep Learning Preprocessing Pipelines\n- [VLDB'21] Analyzing and Mitigating Data Stalls in DNN Training\n- [VLDB'21] tf.data: A Machine Learning Data Processing Framework\n\n**Preprocessing stalls**\n- [arxiv'24] [PREBA: A Hardware\u002FSoftware Co-Design for Multi-Instance GPU based AI Inference Servers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [ATC'24] [Pecan: Cost-Efficient ML Data Preprocessing with Automatic Transformation Ordering and Hybrid Placement](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fgraur)\n- [HotStorage'24] [A Selective Preprocessing Offloading Framework for Reducing Data Traffic in DL Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3655038.3665947)\n- [VLDB'24] [FusionFlow: Accelerating Data Preprocessing for Machine Learning with CPU-GPU Cooperation](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp863-kim.pdf)\n- [arxiv'23] [Rinas: Training with Dataset Shuffling Can Be General and Fast](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02368)\n- [CVPR'23] [FFCV: Accelerating Training by Removing Data Bottlenecks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FLeclerc_FFCV_Accelerating_Training_by_Removing_Data_Bottlenecks_CVPR_2023_paper.pdf)\n- [RecSys'23] [InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08500)\n- [SIGMOD'23] GoldMiner: Elastic Scaling of Training Data Pre-Processing Pipelines for Deep Learning\n- [VLDB'23] [FastFlow: Accelerating Deep Learning Model Training with Smart Offloading of Input Data Pipeline](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp1086-um.pdf)\n- [SoCC'23] [tf.data service: A Case for Disaggregating ML Input Data Processing](https:\u002F\u002Fanakli.inf.ethz.ch\u002Fpapers\u002Ftfdata_service_SoCC23.pdf)\n  - [arxiv version](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.14826)\n- [ATC'22] Cachew: Machine Learning Input Data Processing as a Service\n- [OSDI'22] Looking Beyond GPUs for DNN Scheduling on Multi-Tenant Clusters\n- [ICPP'19] DLBooster: Boosting End-to-End Deep Learning Workflows with Offloading Data Preprocessing Pipelines\n\n**Fetch stalls (I\u002FO)**\n- [TACO'23] [Fastensor: Optimise the Tensor I\u002FO Path from SSD to GPU for Deep Learning Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3630108)\n- [ICPP'22] Lobster: Load Balance-Aware I\u002FO for Distributed DNN Training\n- [SC'21] Clairvoyant Prefetching for Distributed Machine Learning I\u002FO\n\n**Specific workloads (GNN, DLRM)**\n- [VLDB'25] [Eliminating Data Processing Bottlenecks in GNN Training over Large Graphs via Two-level Feature Compression](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3681954.3681968)\n- [ISCA'24] [PreSto: An In-Storage Data Preprocessing System for Training Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14571)\n- [arxiv'23] [Towards Data-centric Graph Machine Learning: Review and Outlook](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10979)\n- [arxiv'23] [FlexShard: Flexible Sharding for Industry-Scale Sequence Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02959)\n- [MLSys'23] [RecD: Deduplication for End-to-End Deep Learning Recommendation Model Training 
Infrastructure](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fa1126573153ad7e9f44ba80e99316482-Abstract-mlsys2023.html)\n- [ASPLOS'22] [RecShard: statistical feature-based memory optimization for industry-scale neural recommendation](https:\u002F\u002Fwww-cs.stanford.edu\u002Fpeople\u002Ftrippel\u002Fpubs\u002FRecShard-Sethi-ASPLOS-22.pdf)\n- [RecSys'23] [InTune: Reinforcement Learning-based Data Pipeline Optimization for Deep Recommendation Models\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08500)\n- [arxiv'23] MTrainS: Improving DLRM training efficiency using heterogeneous memories\n- [SOSP'23] [Bagpipe: Accelerating Deep Recommendation Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.12429)\n- [SOSP'23] gSampler: General and Efficient GPU-based Graph Sampling for Graph Learning\n- [NSDI'23] BGL: GPU-Efficient GNN Training by Optimizing Graph Data I\u002FO and Preprocessing\n- [DAC'22] A Joint Management Middleware to Improve Training Performance of Deep Recommendation Systems with SSDs\n- [VLDB'22] Accelerating Recommendation System Training by Leveraging Popular Choices\n\n### Caching and distributed storage for ML training\n- [ATC'25] [HyCache: Hybrid Caching for Accelerating DNN Input Preprocessing Pipelines](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fjha)\n- [ICDE'25] [MLKV: Efficiently Scaling up Large Embedding Model Training with Disk-based Key-Value Storage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01506)\n- [TPDS'23] High-Level Data Abstraction and Elastic Data Caching for Data-Intensive AI Applications on Cloud-Native Platforms\n- [SOSP'23] UGACHE: A Unified GPU Cache for Embedding-based Deep Learning\n- [ATC'23] Tectonic-Shift: A Composite Storage Fabric for Large-Scale ML Training\n- [EuroSys'23] SiloD: A Co-design of Caching and Scheduling for Deep Learning Clusters [also in [2.1](#21-dl-scheduling)]\n- [FAST'23] [SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast23\u002Fpresentation\u002Fkhan)\n- [HPCA'23]  iCACHE: An Importance-Sampling-Informed Cache for Accelerating I\u002FO-Bound DNN Model Training \n- [NeurIPS'22] [A Deep Learning Dataloader with Shared Data Preparation](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F6d538a6e667960b168d3d947eb6207a6-Abstract-Conference.html)\n- [CLUSTER'22] [Hvac: Removing I\u002FO Bottleneck for Large-Scale Deep Learning Applications](https:\u002F\u002Fwww.osti.gov\u002Fservlets\u002Fpurl\u002F1902810)\n- [ICDE'22] Fluid: Dataset Abstraction and Elastic Acceleration for Cloud-native Deep Learning Training Jobs\n- [ATC'21] [Refurbish Your Training Data: Reusing Partially Augmented Samples for Faster Deep Neural Network Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc21\u002Fpresentation\u002Flee)\n- [FAST'20] [Quiver: An Informed Storage Cache for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast20\u002Fpresentation\u002Fkumar)\n- [ICPP'20] [DIESEL: A Dataset-Based Distributed Storage and Caching System for Large-Scale Deep Learning Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3404397.3404472)\n- [arXiv'19] [Faster Neural Network Training with Data Echoing](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.05550)\n- [HotCloud'19] [The Case for Unifying Data Loading in Machine Learning 
Clusters](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fhotcloud19\u002Fpresentation\u002Fkakaraparthy)\n\n### LLM data plane\n- [SIGMOD'26] [Hydraulis: Balancing Large Transformer Model Training via Co-designing Parallel Strategies and Data Assignment](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3769802)\n- [arxiv'25] [DataFlow: An LLM-Driven Framework for Unified Data Preparation and Workflow Automation in the Era of Data-Centric AI](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16676)\n- [EMNLP'25] [Demystifying Synthetic Data in LLM Pre-training: A Systematic Study of Scaling Laws, Benefits, and Pitfalls](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01631)\n- [ICDE'25] [Training Data Distribution Estimation for Optimized Pre-Training Data Management](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficde\u002F2025\u002F360300e640\u002F26FZD2zy2IM)\n- [arxiv'25] [Mixtera: A Data Plane for Foundation Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19790)\n\n### Others\n**Data formats**\n- [ECCV'22] L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training\n- [VLDB'21] Progressive compressed records: Taking a byte out of deep learning data\n\n**Data pipeline fairness and correctness**\n- [CIDR'21] Lightweight Inspection of Data Preprocessing in Native Machine Learning Pipelines\n\n**Data labeling automation**\n- [VLDB'18] Snorkel: Rapid Training Data Creation with Weak Supervision\n\n## Training System\n### ML job analysis on GPU clusters\n- [ICSE'24] [An Empirical Study on Low GPU Utilization of Deep Learning Jobs](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficse\u002F2024\u002F021700a880\u002F1V5BksrVgsg)\n- [NSDI'24] Characterization of Large Language Model Development in the Datacenter\n- [NSDI'22] MLaaS in the wild: workload analysis and scheduling in large-scale heterogeneous GPU clusters (`PAI`)\n- [ATC'19] Analysis of Large-Scale Multi-Tenant GPU Clusters for DNN Training Workloads (`Philly`)\n\n### Resource scheduling\n- [arxiv'26] [SkyNomad: On Using Multi-Region Spot Instances to Minimize AI Batch Job Cost](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06520)\n\n- [OSDI'25] [Decouple and Decompose: Scaling Resource Allocation with DeDe](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fxu)\n- [SoCC'25] [Cuckoo: Deadline-Aware Job Packing on Heterogeneous GPUs for DL Model Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3772052.3772266)\n- [arxiv'25] [Semantic-Aware Scheduling for GPU Clusters with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03334)\n- [arxiv'25] [Holistic Heterogeneous Scheduling for Autonomous Applications using Fine-grained, Multi-XPU Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09503)\n- [arxiv'25] [Tesserae: Scalable Placement Policies for Deep Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04953)\n- [arxiv'25] [LeMix: Unified Scheduling for LLM Training and Inference on Multi-GPU Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21276)\n- [EuroSys'25] [Eva: Cost-Efficient Cloud-Based Cluster Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07437)\n- [arxiv'25] [TAPAS: Thermal- and Power-Aware Scheduling for LLM Inference in Cloud Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.02600)\n\n- [arxiv'24] [Zeal: Rethinking Large-Scale Resource Allocation with \"Decouple and 
Decompose\"](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11447v1)\n- [TACO'24] [Taming Flexible Job Packing in Deep Learning Training Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3711927)\n- [SoCC'24] [Kale: Elastic GPU Scheduling for Online DL Model Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3698038.3698532)\n- [arxiv'24] [Rubick: Exploiting Job Reconfigurability for Deep Learning Cluster Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08586)\n- [SC'24] [PAL: A Variability-Aware Policy for Scheduling ML Workloads in GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11919)\n- [OSDI'24] [MAST: Global Scheduling of ML Training across Geo-Distributed Datacenters at Hyperscale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fchoudhury)\n- [ASPLOS'24] [Heet: Accelerating Elastic Training in Heterogeneous Deep Learning Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640375)\n- [Middleware'24] [Optimal Resource Efficiency with Fairness in Heterogeneous GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.18545)\n- [IPDPS'24] Hadar: Heterogeneity-Aware Optimization-Based Online Scheduling for Deep Learning Cluster\n- [EuroSys'24] [Blox: A Modular Toolkit for Deep Learning Schedulers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12621)\n- [NSDI'24] Swing: Short-cutting Rings for Higher Bandwidth Allreduce\n- [NSDI'24] Towards Domain-Specific Network Transport for Distributed DNN Training\n- [NSDI'24] Vulcan: Automatic Query Planning for Live ML Analytics\n- [NSDI'24] [CASSINI: Network-Aware Job Scheduling in Machine Learning Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00852)\n\n- [Survey :mag:] [ACM CSUR'23] [Deep Learning Workload Scheduling in GPU Datacenters: A Survey](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3638757)\n- [arxiv'23] Energy-Efficient GPU Clusters Scheduling for Deep Learning\n- [SC'23] [EasyScale: Accuracy-consistent Elastic Training for Deep Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3581784.3607054)\n- [ICPP'23] CoTrain: Efficient Scheduling for Large-Model Training upon GPU and CPU in Parallel\n- [ICPP'23] Embracing Uncertainty for Equity in Resource Allocation in ML Training\n- [SOSP'23] [Sia: Heterogeneity-aware, goodput-optimized ML-cluster scheduling](https:\u002F\u002Fwww.pdl.cmu.edu\u002FPDL-FTP\u002FBigLearning\u002Fsia_sosp23-final.pdf)\n- [NSDI'23] Shockwave: Proactive, Fair, and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning\n- [EuroSys'23] SiloD: A Co-design of Caching and Scheduling for Deep Learning Clusters [also in [1.2](#12-caching)]\n- [EuroSys'23] Lyra: Elastic Scheduling for Deep Learning Clusters\n- [EuroSys'23] [ElasticFlow: An Elastic Serverless Training Platform for Distributed Deep Learning](https:\u002F\u002Fgudiandian.github.io\u002Fattaches\u002Fasplos23\u002Fasplosb23main-p360.pdf)\n- [ASPLOS'23] Lucid: A Non-intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs\n\n- [arxiv'22] [Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07848)\n- [Survey :mag:] [arxiv, 2022] [Deep Learning Workload Scheduling in GPU Datacenters: Taxonomy, Challenges and Vision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11913)\n- [SoCC'22] [ESCHER: Expressive Scheduling with Ephemeral 
Resources](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3542929.3563498)\n- [NSDI'22] MLaaS in the wild: workload analysis and scheduling in large-scale heterogeneous GPU clusters (`PAI`)\n- [OSDI'22] Looking Beyond GPUs for DNN Scheduling on Multi-Tenant Clusters (`Synergy`)\n- [SIGCOMM'22] Multi-resource interleaving for deep learning training (`Muri`)\n\n- [MLSys'21] Wavelet: Efficient DNN Training with Tick-Tock Scheduling\n- [SoCC'21] Chronus: A Novel Deadline-aware Scheduler for Deep Learning Training Jobs\n- [SC'21] [Characterization and Prediction of Deep Learning Workloads in Large-Scale GPU Datacenters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.01313) (`Helios`)\n- [OSDI'21] Privacy Budget Scheduling (`DPF`)\n- [NSDI'21] Elastic Resource Sharing for Distributed Deep Learning (`AFS`)\n- [OSDI'21] Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning\n\n- [EuroSys'20] Balancing efficiency and fairness in heterogeneous GPU clusters for deep learning (`GandivaFair`)\n- [NSDI'20] Themis: Fair and Efficient GPU Cluster Scheduling\n- [OSDI'20] HiveD: Sharing a GPU Cluster for Deep Learning with Guarantees\n- [OSDI'20] Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads (`Gavel`)\n- [EuroSys'20] AlloX: Compute Allocation in Hybrid Clusters\n- [MLSys'20] Resource Elasticity in Distributed Deep Learning\n\n- [NSDI'19] Tiresias: A GPU Cluster Manager for Distributed Deep Learning\n- [ATC'19] Analysis of Large-Scale Multi-Tenant GPU Clusters for DNN Training Workloads (`Philly`)\n\n- [EuroSys'18] Optimus: an efficient dynamic resource scheduler for deep learning clusters\n- [OSDI'18] Gandiva: Introspective Cluster Scheduling for Deep Learning\n\n### Distributed training\n- [HPCA'26] [WATOS: Efficient LLM Training Strategies and Architecture Co-exploration for Wafer-scale Chip](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12279)\n- [ASPLOS'26] [SuperOffload: Unleashing the Power of Large-Scale LLM Training on Superchips](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21271)\n\n- [arxiv'25] [Diving into 3D Parallelism with Heterogeneous Spot Instance GPUs: Design and Implications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20953)\n- [arxiv'25] [SIGMA: An AI-Empowered Training Stack on Early-Life Hardware](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13488)\n- [arxiv'25] [BOOST: BOttleneck-Optimized Scalable Training Framework for Low-Rank Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12131)\n- [NeurIPS'25] [Synergistic Tensor and Pipeline Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.27257)\n- [arxiv'25] [AsyncHZP: Hierarchical ZeRO Parallelism with Asynchronous Scheduling for Scalable LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20111)\n- [arxiv'25] [PRISM: Probabilistic Runtime Insights and Scalable Performance Modeling for Large-Scale Distributed Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15596)\n- [NeurIPS'25] [First Attentions Last: Better Exploiting First Attentions for Efficient Transformer Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14614)\n- [arxiv'25] [A Flexible Programmable Pipeline Parallelism Framework for Efficient DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05112)\n- [arxiv'25] [SlimPack: Fine-Grained Asymmetric Packing for Balanced and Efficient Variable-Length LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26246)\n- [arxiv'25] [AdaPtis: Reducing Pipeline Bubbles with Adaptive Pipeline 
Parallelism on Heterogeneous Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23722)\n- [arxiv'25] [HAPT: Heterogeneity-Aware Automated Parallel Training on Heterogeneous Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24859)\n- [arxiv'25] [Scaling Up Data Parallelism in Decentralized Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12213)\n- [arxiv'25] [Zorse: Optimizing LLM Training Efficiency on Heterogeneous GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10392)\n- [arxiv'25] [TrainVerify: Equivalence-Based Verification for Distributed LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15961)\n- [arxiv'25] [Cost-Efficient LLM Training with Lifetime-Aware Tensor Offloading via GPUDirect Storage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06472)\n- [arxiv'25] [ZenFlow: Enabling Stall-Free Offloading Training via Asynchronous Updates](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12242)\n- [arxiv'25] [Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02787)\n- [arxiv'25] [H2:Towards Efficient Large-Scale LLM Training on Hyper-Heterogeneous Cluster over 1,000 Chips](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17548)\n- [arxiv'25] [Balanced and Elastic End-to-end Training of Dynamic LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14864)\n- [arxiv'25] [ZenFlow: Enabling Stall-Free Offloading Training via Asynchronous Updates](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14468)\n- [arxiv'25] [SpanTrain: Highly Efficient Cross-domain Model Distributed Training System under Heterogeneous GPUs and Networks in CEE Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15536)\n- [arxiv'25] [Parallel Scaling Law for Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10475)\n- [arxiv'25] [Hetu v2: A General and Scalable Deep Learning System with Hierarchical and Heterogeneous Single Program Multiple Data Annotations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20490)\n- [arxiv'25] [Sailor: Automating Distributed Training over Dynamic, Heterogeneous, and Geo-distributed Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17096)\n- [arxiv'25] [PipeWeaver: Addressing Data Dynamicity in Large Multimodal Model Training with Dynamic Interleaved Pipeline](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14145)\n- [arxiv'25] [You Don't Need All Attentions: Distributed Dynamic Fine-Tuning for Foundation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12471)\n- [arxiv'25] [WLB-LLM: Workload-Balanced 4D Parallelism for Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17924)\n- [arxiv'25] [Nonuniform-Tensor-Parallelism: Mitigating GPU failure impact for Scaled-up LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06095)\n- [arxiv'25] [CFP: Low-overhead Profiling-based Intra-operator Parallelism Generation by Preserving Communication-Free Structures](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00598)\n- [arxiv'25] [OrchMLLM: Orchestrate Multimodal Data with Batch Post-Balancing to Accelerate Multimodal Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23830)\n- [arxiv'25] [Cornstarch: Distributed Multimodal Training Must Be Multimodality-Aware](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11367)\n- [arxiv'25] [PipeOffload: Improving Scalability of Pipeline Parallelism with Memory Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01328)\n- [arxiv'25] 
[AutoHete: An Automatic and Efficient Heterogeneous Training System for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01890)\n- [arxiv'25] [Astra: Efficient and Money-saving Automatic Parallel Strategies Search on Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13480)\n- [arxiv'25] [Scaling Inference-Efficient Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18107)\n- [arxiv'25] [MiniMax-01: Scaling Foundation Models with Lightning Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08313)\n\n- [SC'25] [Hypertron: Efficiently Scaling Large Models by Exploring High-Dimensional Parallelization Space](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759783)\n- [CLUSTER'25] [BMPipe: Bubble-Memory Co-Optimization Strategy Planner for Very-Large DNN Training](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186487\u002F2aCq0Sc5EDm)\n- [OSDI'25] [WLB-LLM: Workload-Balanced 4D Parallelism for Large Language Model Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwang-zheng)\n- [ISCA'25] [FRED: A Wafer-scale Fabric for 3D Parallel DNN Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731055)\n- [ISCA'25] [MeshSlice: Efficient 2D Tensor Parallelism for Distributed DNN Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731077)\n- [ISCA'25] [Scaling Llama 3 Training with Efficient Parallelism Strategies](https:\u002F\u002Faisystemcodesign.github.io\u002Fpapers\u002FLlama3-ISCA25.pdf)\n- [ICML'25] [HALoS: Hierarchical Asynchronous Local SGD over Slow Networks for Geo-Distributed Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04531)\n- [MLSys'25] [Radius: Range-based Gradient Sparsity for Large Foundation Model Pre-training](https:\u002F\u002Fopenreview.net\u002Fforum?id=UCQPWBOWb6)\n- [ICLR'25] [TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06511)\n- [INFOCOM'25] [Espresso: Cost-Efficient Large Model Training by Exploiting GPU Heterogeneity in the Cloud](https:\u002F\u002Ffangmingliu.github.io\u002Ffiles\u002Finfocom25-train.pdf)\n- [TPDS'25] [HpT: Hybrid Acceleration of Spatio-Temporal Attention Model Training on Heterogeneous Manycore Architectures](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10820024)\n- [ASPLOS'25] [GraphPipe: Improving Performance and Scalability of DNN Training with Graph Pipeline Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3669940.3707220)\n- [ASPLOS'25] [FlexSP: Accelerating Large Language Model Training via Flexible Sequence Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715998)\n- [ASPLOS'25] [Spindle: Efficient Distributed Training of Multi-Task Large Models via Wavefront Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715992)\n- [EuroSys'25] [JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3696078)\n\n- [arxiv'24] [Automatically Planning Optimal Parallel Strategy for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00254)\n- [arxiv'24] [Adaptive Batch Size Schedules for Distributed Training of Language Models with Data and Model 
Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21124)\n- [arxiv'24] [Frenzy: A Memory-Aware Serverless LLM Training System for Heterogeneous GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14479)\n- [arxiv'24] [Echo: Simulating Distributed Training At Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12487)\n- [arxiv'24] [Scaling Deep Learning Training with MPMD Pipeline Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14374)\n- [arxiv'24] [Demystifying Workload Imbalances in Large Transformer Model Training over Variable-length Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07894)\n- [arxiv'24] [HETHUB: A Distributed Training System with Heterogeneous Cluster for Large-Scale Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16256)\n- [arxiv'24] [Data-Centric and Heterogeneity-Adaptive Sequence Parallelism for Efficient LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01523)\n- [arxiv'24] [Accelerating Large Language Model Training with 4D Parallelism and Memory Consumption Estimator](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06465)\n- [arxiv'24] [BitPipe: Bidirectional Interleaved Pipeline Parallelism for Accelerating Large Models Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19367)\n- [arxiv'24] [Cephalo: Harnessing Heterogeneous GPU Clusters for Training Transformer Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01075)\n- [arxiv'24] [SimpleFSDP: Simpler Fully Sharded Data Parallel with torch.compile](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00284)\n- [arxiv'24] [FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12707)\n- [arxiv'24] [PipeFill: Using GPUs During Bubbles in Pipeline-parallel LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07192)\n- [arxiv'24] [Poplar: Efficient Scaling of Distributed DNN Training on Heterogeneous GPU Clusters](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.12596)\n- [arxiv'24] [DistTrain: Addressing Model and Data Heterogeneity with Disaggregated Training for Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04275)\n- [arxiv'24] [Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03365)\n- [arxiv'24] [FlashFlex: Accommodating Large Language Model Training over Heterogeneous Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.01143v1)\n- [arxiv'24] [PARALLELGPUOS: A Concurrent OS-level GPU Checkpoint and Restore System using Validated Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12079)\n- [arxiv'24] [Unicron: Economizing Self-Healing LLM Training at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00134)\n- [arxiv'24] [TBA: Faster Large Language Model Training Using SSD-Based Activation Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10013)\n- [arxiv'24] [Optimus: Accelerating Large-Scale Multi-Modal LLM Training by Bubble Exploitation](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.03505)\n- [Survey :mag:] [arxiv'24] [Efficient Training of Large Language Models on Distributed Infrastructures: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20018)\n- [arxiv'24] [LoongTrain: Efficient Training of Long-Sequence LLMs with Head-Context Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18485)\n- [arxiv'24] [PAFT: A Parallel Training Paradigm for Effective LLM 
Fine-Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17923)\n- [arxiv'24] [BurstAttention: An Efficient Distributed Attention Framework for Extremely Long Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09347)\n- [arxiv'24] [Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07816)\n- [arxiv'24] [Accelerating Heterogeneous Tensor Parallelism via Flexible Workload Control](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11469)\n- [arxiv'24] [GRAWA: Gradient-based Weighted Averaging for Distributed Training of Deep Learning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04206)\n- [arxiv'24] [BitDelta: Your Fine-Tune May Only Be Worth One Bit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10193)\n- [arxiv'24] [NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09773)\n- [arxiv'24] [Accelerating Parallel Sampling of Diffusion Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09970)\n- [arxiv'24] [Training DNN Models over Heterogeneous Clusters with Optimal Performance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05302)\n- [arxiv'24] [Breaking MLPerf Training: A Case Study on Optimizing BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02447)\n- [arxiv'24] [LocMoE: A Low-overhead MoE for Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.13920)\n- [arxiv'24] [Re-evaluating the Memory-balanced Pipeline Parallelism: BPipe](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02088)\n- [arxiv'24] [InternEvo: Efficient Long-sequence Large Language Model Training via Hybrid Parallelism and Redundant Sharding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.09149)\n\n- [TPDS'24] [UMPIPE: Unequal Microbatches-Based Pipeline Parallelism for Deep Neural Network Training](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftd\u002F5555\u002F01\u002F10792656\u002F22AQNnaMR6U)\n- [Survey :mag:] [ACM CSUR'24] [Resource-efficient Algorithms and Systems of Foundation Models: A Survey](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3706418)\n- [SOSP'24] [Uncovering Nested Data Parallelism and Data Reuse in DNN Computation with FractalTensor](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3694715.3695961)\n- [SOSP'24] [Enabling Parallelism Hot Switching for Efficient Training of Large Language Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695969)\n- [TACO'24] [ATP: Achieving Throughput Peak for DNN Training via Smart GPU Memory Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3701996)\n- [NeurIPS'24] [Rethinking Memory and Communication Costs for Efficient Data Parallel Training of Large Language Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=4Un2TD9bNe)\n- [NeurIPS'24] [SpeedLoader: An I\u002FO efficient scheme for heterogeneous and distributed LLM operation](https:\u002F\u002Fopenreview.net\u002Fforum?id=Y2I0Fy4sm7)\n- [SC'24] [Accelerating Distributed DLRM Training with Optimized TT Decomposition and Micro-Batching](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a776\u002F21HUVYHhG1O)\n- [SC'24] [Democratizing AI: Open-source Scalable LLM Training on GPU-based Supercomputers](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a036\u002F21HUV5yQsyQ)\n- [SoCC'24] [Distributed training of large language models 
on AWS Trainium](https:\u002F\u002Fwww.amazon.science\u002Fpublications\u002Fdistributed-training-of-large-language-models-on-aws-trainium)\n- [TPDS'24] [AutoDDL: Automatic Distributed Deep Learning With Near-Optimal Bandwidth Cost](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06813)\n- [SOSP'24] Enabling Parallelism Hot Switching for Efficient Training of Large Language Models\n- [SOSP'24] [TENPLEX: Changing Resources of Deep Learning Jobs using Parallelizable Tensor Collections](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05181)\n- [ICPP'24] [AutoPipe: Automatic Configuration of Pipeline Parallelism in Shared GPU Cluster](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673047)\n- [COLM'24] [LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03294)\n- [OSDI'24] [nnScaler: Constraint-Guided Parallelization Plan Generation for Deep Learning Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flin-zhiqi)\n  - [arxiv'23] [SuperScaler: Supporting Flexible DNN Parallelization via a Unified Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08984)\n- [ATC'24] [Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fyuan)\n- [ATC'24] [Metis: Fast Automatic Distributed Training on Heterogeneous GPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fum)\n- [ATC'24] [FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxu-mengwei)\n- [ATC'24] [OPER: Optimality-Guided Embedding Table Parallelization for Large-scale Recommendation Model](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fwang)\n- [HPDC'24] [DataStates-LLM: Lazy Asynchronous Checkpointing for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10707v1)\n- [ICML'24] [Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=uLpyWQPyF9)\n- [ICML'24] [Integrated Hardware Architecture and Device Placement Search](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ucl3B05EsX)\n- [MLSys'24] [DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F45c1f6a8cbf2da59ebf2c802b4f742cd-Paper-Conference.pdf)\n- [MLSys'24] [Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [MobiCom'24] [Asteroid: Resource-Efficient Hybrid Pipeline Parallelism for Collaborative DNN Training on Heterogeneous Edge Devices](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3636534.3649363)\n- [EuroSys'24] [DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3627703.3629585)\n- [EuroSys'24] [ScheMoE: An Extensible Mixture-of-Experts Distributed Training System with Tasks Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650083)\n- [EuroMLSys@EuroSys'24] [ML 
Training with Cloud GPU Shortages: Is Cross-Region the Answer?](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3642970.3655843)\n- [ASPLOS'24] [AdaPipe: Optimizing Pipeline Parallelism with Adaptive Recomputation and Partitioning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651359)\n- [ASPLOS'24] [PrimePar: Efficient Spatial-temporal Tensor Partitioning for Large Transformer Model Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651357)\n- [EuroSys'24] [Aceso: Efficient Parallel DNN Training through Iterative Bottleneck Alleviation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629554)\n- [NSDI'24] [MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fjiang-ziheng)\n- [NSDI'24] [DISTMM: Accelerating Distributed Multi-modal Model Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fhuang)\n- [NSDI'24] Accelerating Neural Recommendation Training with Embedding Scheduling\n- [NSDI'24] Resiliency at Scale: Managing Google’s TPUv4 Machine Learning Supercomputer\n- [NSDI'24] QuickUpdate: a Real-Time Personalization System for Large-Scale Recommendation Models\n- [NSDI'24] [Scaling Large Language Model Training to More Than 10,000 GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15627)\n- [TKDE'24] [Improving Automatic Parallel Training via Balanced Memory Workload Optimization](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10449463)\n  - extended version of Galvatron (VLDB'23)\n  - arxiv version (2023): [link](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02031)\n- [ICLR'24] [Zero Bubble (Almost) Pipeline Parallelism](https:\u002F\u002Fopenreview.net\u002Fforum?id=tuzTN0eIO5)\n- [ICLR'24] [CO2: Efficient Distributed Training with Full Communication-Computation Overlap](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZO5cn4IfaN)\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16265)\n- [AAMAS'24] [Holonic Learning: A Flexible Agent-based Distributed Machine Learning Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10839)\n- [VLDB'24] [Saturn: An Optimized Data System for Multi-Large-Model Deep Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01226)\n- [HPCA'24] [Tessel: Boosting Distributed Execution of Large DNN Models via Flexible Schedule Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15269)\n- [NSDI'24] Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances\n- [EuroSys'24] [HAP: SPMD DNN Training on Heterogeneous GPU Clusters with Automated Program Synthesis](https:\u002F\u002Fi.cs.hku.hk\u002F~cwu\u002Fpapers\u002Fswzhang-eurosys24.pdf)\n\n- [arxiv'23] [vTrain: A Simulation Framework for Evaluating Cost-effective and Compute-optimal Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12391)\n- [arxiv'23] [ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02515)\n- [arxiv'23] [FlexModel: A Framework for Interpretability of Distributed Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03140)\n- [arxiv'23] [Holmes: Towards Distributed Training Across Clusters with Heterogeneous NIC Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03549)\n- [arxiv'23] [RTP: Rethinking Tensor Parallelism with Memory 
Deduplication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.01635)\n- [arxiv'23] [FP8-LM: Training FP8 Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18313)\n- [arxiv'23] [Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU\u002FTPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16355)\n- [arxiv'23] [DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14509)\n- [arxiv'23] [A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06497)\n- [arxiv'23] [FLM-101B: An Open LLM and How to Train It with $100K Budget](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.03852.pdf)\n- [arxiv'23] [UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16375)\n- [arxiv'23] Modeling Parallel Programs using Large Language Models\n- [arxiv'23] [Proteus: Simulating the Performance of Distributed DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02267)\n- [arxiv'23] Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training\n- [arxiv'23] Decoupled Model Schedule for Deep Learning Training\n- [arxiv'23] RAF: Holistic Compilation for Deep Learning Model Training\n- [arxiv'23] Ada-Grouper: Accelerating Pipeline Parallelism in Preempted Network by Adaptive Group-Scheduling for Micro-Batches\n- [arxiv'23] Does compressing activations help model parallel training?\n- [arxiv'23] Colossal-Auto: Unified Automation of Parallelization and Activation Checkpoint for Large-scale Models\n- [arxiv'23] Scaling Vision Transformers to 22 Billion Parameters\n- [arxiv'23] Auto-Parallelizing Large Models with Rhino: A Systematic Approach on Production AI Platform\n- [arxiv'23] TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic Parallelisation\n- [arxiv'23] [SuperScaler: Supporting Flexible DNN Parallelization via a Unified Abstraction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08984)\n- [arxiv'23] ATP: Adaptive Tensor Parallelism for Foundation Models\n\n- [ICPP'23] Mercury: Fast and Optimal Device Placement for Large Deep Learning Models\n- [IPDPS'23] [MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [CLUSTER'23] Prophet: Fine-grained Load Balancing for Parallel Training of Large-scale MoE Models\n- [NeurIPS'23] [ASPEN: Breaking Operator Barriers for Efficient Parallelization of Deep Neural Networks](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fd899a31938c7838965b589d9b14a5ca6-Abstract-Conference.html)\n- [NeurIPS'23] [DeepPCR: Parallelizing Sequential Operations in Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.16318)\n- [DAC'23] MixPipe: Efficient Bidirectional Pipeline Parallelism for Training Large-Scale Models\n- [SC'23] [Hanayo: Harnessing Wave-like Pipeline Parallelism for Enhanced Large Model Training Efficiency](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.15762.pdf)\n- [SOSP'23] PIT: Optimization of Dynamic Sparse Deep Learning Models via Permutation Invariant Transformation\n- [SOSP'23] [Oobleck: Resilient Distributed Training of Large Models Using Pipeline 
Templates](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613152)\n- [TPDS'23] [Fold3D: Rethinking and Parallelizing Computational and Communicational Tasks in the Training of Large DNN Models](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10050126)\n- [MICRO'23] [Grape: Practical and Efficient Graphed Execution for Dynamic Deep Neural Networks on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3613424.3614248)\n- [HPCA'23] [Phloem: Automatic Acceleration of Irregular Applications with Fine-Grain Pipeline Parallelism](https:\u002F\u002Fpeople.csail.mit.edu\u002Fqmn\u002Fpapers\u002Fnguyen_phloem_hpca_2023_preprint.pdf)\n- [ACL'23] [Sequence Parallelism: Long Sequence Training from System Perspective](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.134\u002F)\n- [CCGrid'23] A Deep Learning Pipeline Parallel Optimization Method\n- [OSDI'23] MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms\n- [ATC'23] Accelerating Distributed MoE Training and Inference with Lina\n- [ATC'23] SmartMoE: Efficiently Training Sparsely-Activated Models through Combining Offline and Online Parallelization\n- [ATC'23] MSRL: Distributed Reinforcement Learning with Dataflow Fragments\n- [Survey :mag:] [TPDS'23] A Survey on Auto-Parallelism of Large-Scale Deep Learning Training\n- [ICML'23] [SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11913)\n- [ICML'23] BPipe: Memory-Balanced Pipeline Parallelism for Training Large Language Models\n- [ICS'23] A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training\n- [NSDI'23] TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Training Jobs\n- [NSDI'23] Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs\n- [NSDI'23] [ARK: GPU-driven Code Execution for Distributed Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fhwang)\n- [SIGMOD'23] FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement\n- [MLSys'23] On Optimizing the Communication of Model Parallelism\n- [MLSys'23] [MegaBlocks: Efficient Sparse Training with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F5a54f79333768effe7e8927bcccffe40-Abstract-mlsys2023.html)\n- [MLSys'23] [Tutel: Adaptive Mixture-of-Experts at Scale](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F9412531719be7ccf755c4ff98d0969dc-Abstract-mlsys2023.html)\n- [TPDS'23] Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models\n- [PPoPP'23] Elastic Averaging for Efficient Pipelined DNN Training\n- [PPoPP'23] Efficient All-Reduce for Distributed DNN Training in Optical Interconnect Systems\n- [VLDB'23] MiCS: Near-linear Scaling for Training Gigantic Model on Public Cloud\n- [VLDB'23] Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism\n- [ASPLOS'23] Mobius: Fine Tuning Large-Scale Models on Commodity GPU Servers\n- [ASPLOS'23] Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression\n\n- [arxiv'22] Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training\n- [arxiv'22] Using DeepSpeed and Megatron to 
Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model\n- [ICPP'22] [Tesseract: Parallelize the Tensor Parallelism Efficiently](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.14500)\n- [MLSys'22] [Synthesizing optimal parallelism placement and reduction strategies on hierarchical systems for deep learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html)\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10548)\n- [NeurIPS'22] Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees\n- [SoCC'22] Accelerating Large-Scale Distributed Neural Network Training with SPMD Parallelism\n- [MLSys'22] Pathways: Asynchronous distributed dataflow for ML\n- [MLSys'22] [SRIFTY: Swift and Thrifty Distributed Neural Network Training on the Cloud](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F0cafb7890f6a7d4de65507d5bb7e0187-Abstract.html)\n- [MLSys'22] [Efficient Strong Scaling Through Burst Parallel Training](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Fb99e69074b2fa1d8c8fe0d5b60e19397-Abstract.html)\n- [EuroSys'22] Varuna: scalable, low-cost training of massive deep learning models\n- [ATC'22] Whale: Efficient Giant Model Training over Heterogeneous GPUs\n- [NeurIPS'22] [AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07297)\n- [PPoPP'22] [FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503221.3508418)\n- [ICML'22] [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05596)\n- [ICML'22] [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdu22c\u002Fdu22c.pdf)\n- [HPDC'22] Hare: Exploiting Inter-job and Intra-job Parallelism of Distributed Machine Learning on Heterogeneous GPUs\n- [OSDI'22] Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning\n- [NSDI'22] Accelerating Collective Communication in Data Parallel Training across Deep Learning Frameworks\n\n- [arxiv'21] Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training\n- [arxiv'21] GSPMD: General and Scalable Parallelization for ML Computation Graphs\n- [JMLR'21] [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03961)\n- [TPDS'21] TensorOpt: Exploring the Tradeoffs in Distributed DNN Training With Auto-Parallelism\n- [ATC'21] Fine-tuning giant neural networks on commodity hardware with automatic pipeline model parallelism\n- [SIGMOD'21] Heterogeneity-Aware Distributed Machine Learning Training via Partial Reduce [also in [2.10](#210-communication-optimization)]\n- [MLSys'21] PipeMare: Asynchronous Pipeline Parallel DNN Training\n- [ICLR'21] GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding\n- [NeurIPS'21] Piper: Multidimensional Planner for DNN Parallelization\n- [ICML'21] Memory-Efficient Pipeline-Parallel DNN Training\n- [ICML'21] TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models\n- [ICML'21] PipeTransformer: Automated Elastic 
Pipelining for Distributed Training of Large-scale Models\n- [SC'21] Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines\n- [SC'21] Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM (`PTD-P` or `Megatron-LM v2`)\n- [FAST'21] Behemoth: A Flash-centric Training Accelerator for Extreme-scale DNNs\n- [PPoPP'21] DAPPLE: a pipelined data parallel approach for training large models\n- [VLDB'21] Distributed Deep Learning on Data Systems: A Comparative Analysis of Approaches\n\n- [HPCA'20] AccPar: Tensor Partitioning for Heterogeneous Deep Learning Accelerators\n- [NeurIPS'20] Efficient Algorithms for Device Placement of DNN Graph Operators\n- [arxiv'20] Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism\n- [KDD'20 Tutorial] DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters\n- [VLDB'20] PyTorch Distributed: Experiences on Accelerating Data Parallel Training\n- [OSDI'20] A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU\u002FCPU Clusters (`BytePS`)\n- [SOSP'19] PipeDream: Generalized Pipeline Parallelism for DNN Training\n- [NeurIPS'20] [Language Models are Few-Shot Learners](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002F1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html?utm_medium=email&utm_source=transaction) [**From OpenAI**]\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.14165)\n- [arxiv'20] [Scaling Laws for Neural Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.08361) [**From OpenAI**]\n\n- [HPCA'19] [HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02067)\n- [IEEE MICRO'19] [Optimizing Multi-GPU Parallelization Strategies for Deep Learning Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.13257)\n- [MLSys'19] [Beyond data and model parallelism for deep neural networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.05358) (`FlexFlow`)\n- [MLSys'19] TicTac: Accelerating Distributed Deep Learning with Communication Scheduling\n- [EuroSys'19] Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks\n- [EuroSys'19] Supporting Very Large Models using Automatic Dataflow Graph Partitioning (`Tofu`)\n- [SOSP'19] A Generic Communication Scheduler for Distributed DNN Training Acceleration\n- [NeurIPS'19] Mesh-TensorFlow: Deep Learning for Supercomputers\n- [NeurIPS'19] GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism\n- [ICML'18] Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks\n\n- [Survey :mag:] [IJCAI'22] Survey on Efficient Training of Large Neural Networks\n- [Survey :mag:] [ACM CSUR'19] Demystifying Parallel and Distributed Deep Learning\n- [Survey :mag:] [ACM CSUR'19] Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques, and Tools\n\n### AutoML\n- [OSDI'23] Hydro: Surrogate-Based Hyperparameter Tuning Service in Datacenters\n- [NSDI'23] ModelKeeper: Accelerating DNN Training via Automated Training Warmup\n- [OSDI'20] Retiarii: A Deep Learning Exploratory-Training Framework\n\n### GNN training system\n> For a comprehensive list of GNN systems papers, refer to [https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems](https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems).\n\n- [PPoPP'26] [TAC: Cache-Based System for Accelerating Billion-Scale GNN 
Training on Multi-GPU Platform](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786460)\n- [PPoPP'26] [ElasGNN: An Elastic Training Framework for Distributed GNN Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786440)\n- [SC'25] [Plexus: Taming Billion-edge Graphs with 3D Parallel Full-graph GNN Training](https:\u002F\u002Fpssg.cs.umd.edu\u002Fassets\u002Fpapers\u002F2025-11-plexus-sc.pdf)\n- [SIGMOD'25] [NeutronHeter: Optimizing Distributed Graph Neural Network Training for Heterogeneous Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3749175)\n- [ICDE'25] [CaliEX: A Disk-Based Large-Scale GNN Training System with Joint Design of Caching and Execution](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficde\u002F2025\u002F360300c908\u002F26FZBj8WvyU)\n- [arxiv'25] [Plexus: Taming Billion-edge Graphs with 3D Parallel GNN Training](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04083)\n- [HPCA'25] [Mithril: A Scalable System for Deep GNN Training](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fhpca\u002F2025\u002F064700b052\u002F25Ko4zIl7So)\n- [arxiv'25] [Armada: Memory-Efficient Distributed Training of Large-Scale Graph Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17846)\n- [VLDB'25] [NeutronTP: Load-Balanced Distributed Full-Graph GNN Training with Tensor Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20379)\n- [arxiv'24] [FastGL: A GPU-Efficient Framework for Accelerating Sampling-Based GNN Training at Large Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14939)\n- [ICPP'24] [GNNDrive: Reducing Memory Contention and I\u002FO Congestion for Disk-based GNN Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673063)\n- [VLDB'24] [NeutronStream: A Dynamic GNN Training Framework with Sliding Window for Graph Streams](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02473)\n- [arxiv'23] [ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.07482)\n- [arxiv'23] [Helios: An Efficient Out-of-core GNN Training System on Terabyte-scale Graphs with In-memory Performance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00837)\n- [arxiv'23] [GNNPipe: Accelerating Distributed Full-Graph GNN Training with Pipelined Model Parallelism](https:\u002F\u002Fbrowse.arxiv.org\u002Fpdf\u002F2308.10087.pdf)\n- [MLSys'23] [Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F0ea77501c3f6bcba97e082d03a40646d-Abstract-mlsys2023.html)\n- [SIGMOD'23] DUCATI: A Dual-Cache Training System for Graph Neural Networks on Giant Graphs with the GPU\n- [OSDI'23] [MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi23-wang-yuke.pdf)\n- [EuroSys'23] [MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02365)\n- [KDD'22] [Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Heterogeneous Graphs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3534678.3539177)\n- [VLDB'22] [TGL: a general framework for temporal GNN training on 
billion-scale graphs](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol15\u002Fp1572-zhou.pdf)\n- [OSDI'21] [P3: Distributed Deep Graph Learning at Scale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi21\u002Fpresentation\u002Fgandhi)\n\n## Inference System\n- [MLSys'26] [Meeting SLOs, Slashing Hours: Automated Enterprise LLM Optimization with OptiKIT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20408)\n- [arxiv'26] [Laser: Unlocking Layer-Level Scheduling for Efficient Multi-SLO LLM Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786413)\n- [arxiv'26] [Speculative Decoding: Performance or Illusion?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11580)\n- [arxiv'26] [Plan, Verify and Fill: A Structured Parallel Decoding Approach for Diffusion Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12247)\n- [arxiv'26] [PLA-Serve: A Prefill-Length-Aware LLM Serving System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11589)\n- [PPoPP'26] [Accelerating Sparse Transformer Inference on GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06095)\n- [VLDB'26] [ORBITFLOW: SLO-Aware Long-Context LLM Serving with Fine-Grained KV Cache Reconfiguration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10729)\n- [IEEE Computer'26] [Challenges and Research Directions for Large Language Model Inference Hardware](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05047)\n- [arxiv'26] [AIConfigurator: Lightning-Fast Configuration Optimization for Multi-Framework LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06288)\n- [arxiv'26] [FlashInfer-Bench: Building the Virtuous Cycle for AI-driven LLM Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.00227)\n- [NSDI'26] [FlexLLM: Token-Level Co-Serving of LLM Inference and Finetuning with SLO Guarantees](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Foliaro)\n- [NSDI'26] [FastServe: Iteration-Level Preemptive Scheduling for Large Language Model Inference](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fwu-bingyang)\n- [NSDI'26] [HydraServe: Minimizing Cold Start Latency for Serverless LLM Serving in Public Clouds](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Flou)\n- [FPGA'26] [CXL-SpecKV: A Disaggregated FPGA Speculative KV-Cache for Datacenter LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11920)\n- [ASPLOS'26] [XY-Serve: End-to-End Versatile Production Serving for Dynamic LLM Workloads](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762228)\n- [AAAI'26] [Lethe: Layer- and Time-Adaptive KV Cache Pruning for Reasoning-Intensive LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06029)\n- [EuroSys'26] [FlexPipe: Adapting Dynamic LLM Serving Through Inflight Pipeline Refactoring in Fragmented Serverless Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.11938)\n- [EuroSys'26] [KunServe: Parameter-centric Memory Management for Efficient Memory Overloading Handling in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18169)\n- [EuroSys'26] [TokenFlow: Responsive LLM Text Streaming Serving under Request Burst via Preemptive Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02758)\n\n- [SoCC'25] Multiplexed Heterogeneous LLM Serving via Stage-Aligned Parallelism\n- [arxiv'25] [TraCT: Disaggregated LLM Serving with CXL Shared Memory KV Cache at Rack-Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18194)\n- 
[arxiv'25] [L4: Low-Latency and Load-Balanced LLM Serving via Length-Aware Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19179)\n- [arxiv'25] [Efficient Multi-Adapter LLM Serving via Cross-Model KV-Cache Reuse with Activated LoRA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.17910)\n- [arxiv'25] [EVICPRESS: Joint KV-Cache Compression and Eviction for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14946)\n- [arxiv'25] [Staggered Batch Scheduling: Co-optimizing Time-to-First-Token and Throughput for High-Efficiency LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16134)\n- [arxiv'25] [MultiPath Transfer Engine: Breaking GPU and Host-Memory Bandwidth Bottlenecks in LLM Services](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16056)\n- [arxiv'25] [PROSERVE: Unified Multi-Priority Request Scheduling for LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12928)\n- [arxiv'25] [xGR: Efficient Generative Recommendation Serving at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11529)\n- [arxiv'25] [ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13586)\n- [arxiv'25] [TokenScale: Timely and Accurate Autoscaling for Disaggregated LLM Serving with Token Velocity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03416)\n- [arxiv'25] [AugServe: Adaptive Request Scheduling for Augmented Large Language Model Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04013)\n- [arxiv'25] [Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01278)\n- [arxiv'25] [SIMPLE: Disaggregating Sampling from GPU Inference into a Decision Plane for Faster Distributed LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.00719)\n- [arxiv'25] [OmniInfer: System-Wide Acceleration Techniques for Optimizing LLM Serving Throughput and Latency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22481)\n- [arxiv'25] [OOCO: Latency-disaggregated Architecture for Online-Offline Co-locate LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21862)\n- [arxiv'25] [Serving Heterogeneous LoRA Adapters in Distributed LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22880)\n- [arxiv'25] [Harli: SLO-Aware Co-location of LLM Inference and PEFT-based Finetuning on Model-as-a-Service Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11729)\n- [arxiv'25] [CLO: Efficient LLM Inference System with CPU-Light KVCache Offloading via Algorithm-System Co-Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14510)\n- [arxiv'25] [FengHuang: Next-Generation Memory Orchestration for AI Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10753)\n- [arxiv'25] [FLUXSERVE: Serving a Variety of LLMs for Best-Effort Efficiency via Dynamic Temperature-Aware Multiplexing](https:\u002F\u002Fopenreview.net\u002Fpdf?id=I1CGMNNX5i)\n- [arxiv'25] [Synera: Synergistic LLM Serving across Device and Cloud at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07423)\n- [arxiv'25] [DuetServe: Harmonizing Prefill and Decode for LLM Serving via Adaptive GPU Multiplexing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.04791)\n- [Middleware'25] [Argus: Quality-Aware High-Throughput Text-to-Image Inference Serving System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06724)\n- [arxiv'25] [From Models to Operators: Rethinking Autoscaling Granularity for Large Generative 
Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02248)\n- [arxiv'25] [TapOut: A Bandit-Based Approach to Dynamic Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02017)\n- [NeurIPS'25] [SuffixDecoding: Extreme Speculative Decoding for Emerging AI Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04975)\n- [arxiv'25] [FREESH: Fair, Resource- and Energy-Efficient Scheduling for LLM Serving on Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00807)\n- [EMNLP'25] [Distributed LLM Serving on Consumer-Grade GPUs by Reconciling Computation and Communication](https:\u002F\u002Faclanthology.org\u002F2025.findings-emnlp.957.pdf)\n- [arxiv'25] [Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18245)\n- [MICRO'25] [MX+: Pushing the Limits of Microscaling Formats for Efficient Large Language Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756118)\n- [MICRO'25] [Kelle: Co-design KV Caching and eDRAM for Efficient LLM Serving in Edge Computing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3725843.3756071)\n- [arxiv'25] [SPAD: Specialized Prefill and Decode Hardware for Disaggregated LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08544)\n- [arxiv'25] [From Tokens to Layers: Redefining Stall-Free Scheduling for LLM Serving with Layered Prefill](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08055)\n- [CLUSTER'25] [Scalable and Fast Inference Serving via Hybrid Communication Scheduling on Heterogeneous Networks](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186468\u002F2aCq2GqaO6Q)\n- [CLUSTER'25] [Rock: Serving Multimodal Models in Cloud with Heterogeneous-Aware Resource Orchestration for Thousands of LoRA Adapters](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186463\u002F2aCq3B9XD6o)\n- [arxiv'25] [TridentServe: A Stage-level Serving System for Diffusion Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02838)\n- [arxiv'25] [MACE: A Hybrid LLM Serving System with Colocated SLO-aware Continuous Retraining Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03283)\n- [Survey :mag:] [ACM CSUR'25] [Towards Efficient Generative Large Language Model Serving: A Survey from Algorithms to Systems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3754448)\n- [SOSP'25] [Aegaeon: Effective GPU Pooling for Concurrent LLM Serving on the Market](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764815)\n- [SOSP'25] [IC-Cache: Efficient Large Language Model Serving via In-context Caching](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764829)\n- [SOSP'25] [DiffKV: Differentiated Memory Management for Large Language Models with Parallel KV Compaction](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764810)\n- [arxiv'25] [TetriServe: Efficient DiT Serving for Heterogeneous Image Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01565)\n- [arxiv'25] [Parallax: Efficient LLM Inference Service over Decentralized Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26182)\n- [arxiv'25] [RServe: Overlapping Encoding and Prefill for Efficient LMM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24381)\n- [arxiv'25] [Cronus: Efficient LLM inference on Heterogeneous GPU Clusters via Partially 
Disaggregated Prefill](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17357)\n- [arxiv'25] [Shift Parallelism: Low-Latency, High-Throughput LLM Inference for Dynamic Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16495)\n- [COLM'25] [OverFill: Two-Stage Models for Efficient Language Model Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08446)\n- [ACM MM'25] [TinyServe: Query-Aware Cache Selection for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12211)\n- [arxiv'25] [Scaling Up Throughput-oriented LLM Inference Applications on Heterogeneous Opportunistic GPU Clusters with Pervasive Context Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13201)\n- [SC'25] [Hetis: Serving LLMs in Heterogeneous GPU Clusters with Fine-grained and Dynamic Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08309)\n- [arxiv'25] [FineServe: Precision-Aware KV Slab and Two-Level Scheduling for Heterogeneous Precision LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06261)\n- [arxiv'25] [AdaptCache: KV Cache Native Storage Hierarchy for Low-Delay and High-Quality Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00105)\n- [arxiv'25] [Predictable LLM Serving on GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20274)\n- [SIGCOMM'25] [SCX: Stateless KV-Cache Encoding for Cloud-Scale Confidential Transformer Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750509)\n- [arxiv'25] [Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19559)\n- [arxiv'25] [Rethinking Caching for LLM Serving Systems: Beyond Traditional Heuristics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18736)\n- [OSDI'25] [BlitzScale: Fast and Live Large Model Autoscaling with O(1) Host Caching](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhang-dingyan)\n- [OSDI'25] [WaferLLM: Large Language Model Inference at Wafer Scale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fhe)\n- [OSDI'25] [NanoFlow: Towards Optimal Large Language Model Serving Throughput](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhu-kan)\n- [arxiv'25] [TokenLake: A Unified Segment-level Prefix Cache Pool for Fine-grained Elastic Long-Context LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17219)\n- [arxiv'25] [HyperFlexis: Joint Design of Algorithms and Systems for Multi-SLO Serving and Fast Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15919)\n- [arxiv'25] [Equinox: Holistic Fair Scheduling in Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.16646)\n- [arxiv'25] [Efficient Mixed-Precision Large Language Model Inference with TurboMind](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15601)\n- [ICML'25] [Packrat: Automatic Reconfiguration for Latency Minimization in CPU-based DNN Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18174)\n- [arxiv'25] [Kairos: Low-latency Multi-Agent Serving with Shared LLMs and Excessive Loads in the Public Cloud](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06948)\n- [arxiv'25] [Block: Balancing Load in LLM Serving with Context, Knowledge and Predictive Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03611)\n- [arxiv'25] [Prefill-Decode Aggregation or Disaggregation? 
Unifying Both for Goodput-Optimized LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.01989)\n- [arxiv'25] [Unlock the Potential of Fine-grained LLM Serving via Dynamic Module Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.18006)\n- [ACL'25] [MiniKV: Pushing the Limits of 2-Bit KV Cache via Compression and System Co-Design for Efficient Long Context Inference](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.952.pdf)\n- [ACL'25] [StitchLLM: Serving LLMs, One Block at a Time](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.1305.pdf)\n- [ACL'25] [SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.685.pdf)\n- [arxiv'25] [Helix Parallelism: Rethinking Sharding Strategies for Interactive Multi-Million-Token LLM Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07120)\n- [arxiv'25] [Proactive Intra-GPU Disaggregation of Prefill and Decode in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06608)\n- [arxiv'25] [MIRAGE: KV Cache Optimization through Parameter Remapping for Multi-tenant LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.11507)\n- [CODEML @ ICML'25] [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=HpqH0JakHf)\n- [arxiv'25] [On Evaluating Performance of LLM Inference Serving Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09019)\n- [arxiv'25] [PrefillOnly: An Inference Engine for Prefill-only Workloads in Large Language Model Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07203)\n- [ICML'25] [EPIC: Efficient Position-Independent Caching for Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15332)\n- [arxiv'25] [SiPipe: Bridging the CPU-GPU Utilization Gap for Efficient Pipeline-Parallel LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22033)\n- [arxiv'25] [Utility-Driven Speculative Decoding for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20675)\n- [ATC'25] [DEEPSERVE: Serverless Large Language Model Serving at Scale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fhu-junhao)\n- [ISCA'25] [WindServe: Efficient Phase-Disaggregated LLM Serving with Stream-based Dynamic Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3730999)\n- [ISCA'25] [Hybe: GPU-NPU Hybrid System for Efficient LLM Inference with Million-Token Context Window](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731051)\n- [ICLR'25] [TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05076)\n- [arxiv'25] [Cascadia: A Cascade Serving System for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04203)\n- [arxiv'25] [Efficient and Workload-Aware LLM Serving via Runtime Layer Swapping and KV Cache Resizing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02006)\n- [arxiv'25] [SkyLB: A Locality-Aware Cross-Region Load Balancer for LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24095)\n- [arxiv'25] [EmbAdvisor: Adaptive Cache Management for Sustainable LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23970)\n- [arxiv'25] [SCORPIO: Serving the Right Requests at the Right Time for Heterogeneous SLOs in LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23022)\n- [arxiv'25] [HybridServe: 
Efficient Serving of Large AI Models with Confidence-Based Cascade Routing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12566)\n- [arxiv'25] [ServerlessLoRA: Minimizing Latency and Cost in Serverless Inference for LoRA-Based LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14468)\n- [arxiv'25] [TokenWeave: Efficient Compute-Communication Overlap for Distributed LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11329)\n- [arxiv'25] [Tilus: A Virtual Machine for Arbitrary Low-Precision GPGPU Computation in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12984)\n- [OSDI'25] [Clover: Exploiting Intra-device Parallelism for High Throughput Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhu-kan)\n- [arxiv'25] [ServeGen: Workload Characterization and Generation of Large Language Model Serving in Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09999)\n- [arxiv'25] [ELIS: Efficient LLM Iterative Scheduling System with Response Length Predictor](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.09142)\n- [arxiv'25] [Improving the Serving Performance of Multi-LoRA Large Language Models via Efficient LoRA and KV Cache Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03756)\n- [arxiv'25] [Prism: Unleashing GPU Sharing for Cost-Efficient Multi-LLM Serving](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04021)\n- [arxiv'25] [Tempo: Application-aware LLM Serving with Mixed SLO Requirements](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20068)\n- [arxiv'25] [Ascendra: Dynamic Request Prioritization for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20828)\n- [arxiv'25] [Medha: Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'25] [Streaming, Fast and Slow: Cognitive Load-Aware Streaming for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17999)\n- [arxiv'25] [Bullet: Boosting GPU Utilization for LLM Serving via Dynamic Spatial-Temporal Orchestration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19516)\n- [Survey :mag:] [arxiv'25] [Taming the Titans: A Survey of Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19720)\n- [MLSys'25] [SOLA: Optimizing SLO Attainment for Large Language Model Serving with State-Aware Scheduling](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3231)\n- [MLSys'25] [Marconi: Prefix Caching for the Era of Hybrid LLMs](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3260)\n- [arxiv'25] [PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18583)\n- [arxiv'25] [Circinus: Efficient Query Planner for Compound ML Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16397)\n- [arxiv'25] [HPU: High-Bandwidth Processing Unit for Scalable, Cost-effective LLM Inference via GPU Co-processing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16112)\n- [Mobicom'25] [D2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15299)\n- [arxiv'25] [SeaLLM: Service-Aware and Latency-Optimized Resource Sharing for Large Language Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15720)\n- [arxiv'25] [gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token 
Throttling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14775)\n- [arxiv'25] [Optimizing SLO-oriented LLM Serving with PD-Multiplexing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14489)\n- [arxiv'25] [SLO-Aware Scheduling for Large Language Model Inferences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14966)\n- [arxiv'25] [Cost-Efficient LLM Serving in the Cloud: VM Selection with KV Cache Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11816)\n- [ISPASS'25] [Characterizing and Optimizing LLM Inference Workloads on CPU-GPU Coupled Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11750)\n- [arxiv'25] [HELIOS: Adaptive Model And Early-Exit Selection for Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.10724)\n- [arxiv'25] [DynaServe: Unified and Elastic Tandem-Style Execution for Dynamic Disaggregated LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09285)\n- [arxiv'25] [Efficient LLM Serving on Hybrid Real-time and Best-effort Requests](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09590)\n- [arxiv'25] [SLOs-Serve: Optimized Serving of Multi-SLO LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08784)\n- [arxiv'25] [Understanding and Optimizing Multi-Stage AI Inference Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09775)\n- [arxiv'24] [Fast and Live Model Auto Scaling with O(1) Host Caching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17246)\n- [SIGMOD'25] [Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07494)\n- [EuroMLSys'25] [Performance Aware LLM Load Balancer for Mixed Workloads](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3721146.3721947)\n- [MLSys'25] [Rethinking Key-Value Cache Compression Techniques for Large Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.24000)\n- [arxiv'25] [WaferLLM: A Wafer-Scale LLM Inference System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04563)\n- [HPCA'25] [PAISE: PIM-Accelerated Inference Scheduling Engine for Transformer-based LLM](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10946299)\n- [HPCA'25] throttLL'eM: Predictive GPU Throttling for Energy Efficient LLM Inference Serving\n- [arxiv'25] [Niyama : Breaking the Silos of LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22562)\n- [ASPLOS'25] [Aqua: Network-Accelerated Memory Offloading for LLMs in Scale-Up GPU Domains](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715983)\n- [ASPLOS'25] [Past-Future Scheduler for LLM Serving under SLA Guarantees](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716011)\n- [ASPLOS'25] [Accelerating LLM Serving for Multi-turn Dialogues with Efficient Resource Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716245)\n- [EuroSys'25] [SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717481)\n- [EuroSys'25] [Multiplexing Dynamic Deep Learning Workloads with SLO-awareness in GPU Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3696074)\n- [arxiv'25] [Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20552)\n- [EuroSys'25] [NeuStream: Bridging Deep Learning 
Serving and Stream Processing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717489)\n- [SoCC'25] [ModServe: Scalable and Resource-Efficient Large Multimodal Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3772052.3772254)\n- [arxiv'25] [PipeBoost: Resilient Pipelined Architecture for Fast Serverless LLM Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17707)\n- [ISCA'25] [Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18599)\n- [arxiv'25] [Jenga: Effective Memory Management for Serving LLM with Heterogeneity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18292)\n- [arxiv'25] [AccelGen: Heterogeneous SLO-Guaranteed High-Throughput LLM Inference Serving for Diverse Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13737)\n- [FAST'25] [Mooncake: Trading More Storage for Less Computation — A KVCache-centric Architecture for Serving LLM Chatbot](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast25\u002Fpresentation\u002Fqin)\n- [arxiv'25] [Collaborative Speculative Inference for Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10325)\n- [NSDI'25] [SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fkhare)\n- [arxiv'25] [Seesaw: High-throughput LLM Inference via Model Re-sharding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06433)\n- [arxiv'25] [SpecServe: Efficient and SLO-Aware Large Language Model Serving with Adaptive Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05096)\n- [arxiv'25] [ADOR: A Design Exploration Framework for LLM Serving with Enhanced Latency and Throughput](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04253)\n- [arxiv'25] [Long-Context Inference with Retrieval-Augmented Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [WWW'25] [External Large Foundation Model: How to Efficiently Serve Trillions of Parameters for Online Ads Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17494)\n- [arxiv'25] [Make LLM Inference Affordable to Everyone: Augmenting GPU Memory with NDP-DIMM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16963)\n- [arxiv'25] [KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.16002)\n- [arxiv'25] [Serving Models, Fast and Slow: Optimizing Heterogeneous LLM Inferencing Workloads at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14617)\n- [arxiv'25] [LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14866)\n- [arxiv'25] [HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12574)\n- [arxiv'25] [Autellix: An Efficient Serving Engine for LLM Agents as General Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13965)\n- [MLSys'25] [ThunderServe: High-performance and Cost-efficient LLM Serving in Cloud Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09334)\n- [ICLR'25] [HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.07903)\n- [arxiv'25] [Memory Offloading for Large Language Model Inference with Latency SLO Guarantees](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.08182)\n- 
[EuroSys'25] [SkyServe: Serving AI Models across Regions and Clouds with Spot Instances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01438)\n- [ASPLOS'25] [Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707215)\n- [ASPLOS'25] [Dilu: Enabling GPU Resourcing-on-Demand for Serverless DL Serving via Introspective Elasticity](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707251)\n- [arxiv'25] [MPIC: Position-Independent Multimodal Context Caching System for Efficient MLLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01960)\n- [arxiv'25] [Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00722)\n- [arxiv'25] [Towards Efficient Large Multimodal Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00937)\n- [arxiv'25] [HeteroLLM: Accelerating Large Language Model Inference on Mobile SoCs platform with Heterogeneous AI Accelerators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14794)\n- [arxiv'25] [HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14808)\n- [arxiv'25] [Locality-aware Fair Scheduling in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14312)\n- [arxiv'25] [DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05215)\n- [arxiv'25] [DeepFlow: Serverless Large Language Model Serving at Scale](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2501.14417)\n- [arxiv'25] [iServe: An Intent-based Serving System for LLMs](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2501.13111)\n- [arxiv'25] [AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12162v1)\n- [arxiv'25] [EchoLM: Accelerating LLM Serving with Real-time Knowledge Distillation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12689)\n- [arxiv'25] [OMEGA: A Low-Latency GNN Serving System for Large Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08547)\n- [arxiv'25] [PRESERVE: Prefetching Model Weights and KV-Cache in Distributed LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08192)\n- [arxiv'25] [Hierarchical Autoscaling for Large Language Model Serving with Chiron](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08090)\n- [arxiv'25] [Mell: Memory-Efficient Large Language Model Serving via Multi-GPU KV Cache Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06709)\n- [arxiv'25] [Accelerated Diffusion Models via Speculative Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05370)\n- [MLSys'25] [FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [EuroSys'25] [A House United Within Itself: SLO-Awareness for On-Premises Containerized ML Inference Clusters via Faro](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19488)\n- [arxiv'24] [LLM Inference Unveiled: Survey and Roofline Model Insights](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16363)\n- [arxiv'24] [Efficiently Serving LLM Reasoning Programs with Certaindex](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20993)\n- [arxiv'24] [LoL-PIM: Long-Context LLM Decoding with Scalable DRAM-PIM System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20166)\n- [arxiv'24] [TimelyLLM: Segmented LLM Serving System for Time-sensitive 
Robotic Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18695)\n- [arxiv'24] [Dovetail: A CPU\u002FGPU Heterogeneous Speculative Decoding for LLM inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18934)\n- [arxiv'24] [KunServe: Elastic and Efficient Large Language Model Serving with Parameter-centric Memory Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18169)\n- [arxiv'24] [Tackling the Dynamicity in a Production LLM Serving System with SOTA Optimizations via Hybrid Prefill\u002FDecode\u002FVerify Scheduling on Efficient Meta-kernels](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18106)\n- [arxiv'24] [SYMPHONY: Improving Memory Management for LLM Inference Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16434)\n- [arxiv'24] [A System for Microserving of LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12488)\n- [arxiv'24] [HashAttention: Semantic Sparsity for Faster Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14468)\n- [arxiv'24] [SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02532)\n- [arxiv'24] [Unifying KV Cache Compression for Large Language Models with LeanKV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03131)\n- [arxiv'24] [PREBA: A Hardware\u002FSoftware Co-Design for Multi-Instance GPU based AI Inference Servers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [Survey :mag:] [ACM CSUR'24] [Resource-efficient Algorithms and Systems of Foundation Models: A Survey](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3706418)\n- [arxiv'24] [BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16102)\n- [ICML'25] [SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10958) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [ICLR'25] [SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02367) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [ICML'25] [SpargeAttention: Accurate and Training-free Sparse Attention Accelerating Any Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18137) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSpargeAttn)]\n- [arxiv'24] [Optimizing Speculative Decoding for Serving Large Language Models Using Goodput](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14066)\n- [ACL'24] [LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16710)\n- [ACL'24] [SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15030)\n- [arxiv'24] [EcoServe: Maximizing Multi-Resource Utilization with SLO Guarantees in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06364)\n- [IPDPS'24] [Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08383)\n- [arxiv'24] [EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12247)\n- [NeurIPS'24] [Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early 
Exiting](https:\u002F\u002Fopenreview.net\u002Fforum?id=lT3oc04mDp)\n- [NeurIPS'24] [Toward Efficient Inference for Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=stXtBqyTWX)\n- [NeurIPS'24] [Sequoia: Scalable and Robust Speculative Decoding](https:\u002F\u002Fopenreview.net\u002Fforum?id=rk2L9YGDi2)\n- [arxiv'24] [Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08982)\n- [SC'24] [PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.11798)\n- [SC'24] [SMIless: Serving DAG-based Inference with Dynamic Invocations under Serverless Computing](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a590\u002F21HUVxvcnoA)\n- [arxiv'24] [SuffixDecoding: A Model-Free Approach to Speeding Up Large Language Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04975)\n- [arxiv'24] [V-LoRA: An Efficient and Flexible System Boosts Vision Applications with LoRA LMM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00915)\n- [SenSys'24] [LiteMoE: Customizing On-device LLM Serving via Proxy Submodel Tuning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3666025.3699355)\n- [arxiv'24] [HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01433)\n- [arxiv'24] [NEO: Saving GPU Memory Crisis with CPU Offloading for Online LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01142)\n- [MICRO'24] [Pushing the Performance Envelope of DNN-based Recommendation Systems Inference on GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22249)\n- [arxiv'24] [VL-Cache: Sparsity and Modality-Aware KV Cache Compression for Vision-Language Model Inference Acceleration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.23317)\n- [arxiv'24] [ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21465)\n- [arxiv'24] [Is the GPU Half-Empty or Half-Full? 
Practical Scheduling Techniques for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17840)\n- [arxiv'24] [POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18038)\n- [PML4LRS @ ICLR2024] [Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07033)\n- [arxiv'24] [MagicPIG: LSH Sampling for Efficient LLM Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.16179)\n- [arxiv'24] [Revisiting SLO and Goodput Metrics in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14257)\n- [arxiv'24] [EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15332v1)\n- [arxiv'24] [ParallelSpec: Parallel Drafter for Efficient Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05589)\n- [EuroSys'25] [Fast State Restoration in LLM Serving with HCache](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05004)\n- [arxiv'24] [SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03960)\n- [arxiv'24] [vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04437)\n- [arxiv'24] [Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'24] [CSPS: A Communication-Efficient Sequence-Parallelism based Serving System for Transformer based Models with Long Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [arxiv'24] [DynamoLLM: Designing LLM Inference Clusters for Performance and Energy Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00741)\n- [HPCA'24] [KRISP: Enabling Kernel-wise RIght-sizing for Spatial Partitioned GPU Inference Servers](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10071121)\n- [arxiv'24] [Missile: Fine-Grained, Hardware-Level GPU Resource Isolation for Multi-Tenant DNN Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.13996)\n- [NeurIPS'24] [Efficient LLM Scheduling by Learning to Rank](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.15792)\n- [arxiv'24] [P\u002FD-Serve: Serving Disaggregated Large Language Model at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08147)\n- [arxiv'24] [MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.11743)\n- [SOSP'24] [PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695964)\n- [SOSP'24] [LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695948)\n- [SOSP'24] [Improving DNN Inference Throughput Using Practical, Per-Input Compute Adaptation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695978)\n- [SOSP'24] [Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05385)\n- [arxiv'24] [LLMServingSim: A HW\u002FSW Co-Simulation Infrastructure for LLM Inference Serving at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.05499v1)\n- [ICPP'24] [GMM: An Efficient GPU 
Memory Management-based Model Serving System for Multiple DNN Inference Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673122)\n- [SIGCOMM'24] [CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672274)\n- [ES-FoMO @ ICML'24] [CO2: Precise Attention Score Observation for improving KV Cache Replacement in Large Language Models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=02zPmtcZa0)\n- [OSDI'24] [dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fwu-bingyang)\n- [OSDI'24] [Parrot: Efficient Serving of LLM-based Applications with Semantic Variable](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flin-chaofan)\n- [OSDI'24] [USHER: Holistic Interference Avoidance for Resource Optimized ML Inference](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fshubha)\n- [OSDI'24] [Fairness in Serving Large Language Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fsheng)\n- [OSDI'24] [MonoNN: Enabling a New Monolithic Optimization Space for Neural Network Inference Tasks on Modern GPU-Centric Architectures](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhuang)\n- [OSDI'24] [Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fagrawal)\n- [OSDI'24] [ServerlessLLM: Low-Latency Serverless Inference for Large Language Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Ffu)\n- [OSDI'24] [InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flee)\n- [OSDI'24] [Llumnix: Dynamic Scheduling for Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fsun-biao)\n- [OSDI'24] [DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhong-yinmin)\n- [ATC'24] [Power-aware Deep Learning Model Serving with μ-Serve](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fqiu)\n- [ATC'24] [Fast Inference for Probabilistic Graphical Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fjiang)\n- [ATC'24] [Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fgao-bin-cost)\n- [ATC'24] [PUZZLE: Efficiently Aligning Large Language Models through Light-Weight Context Switch](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Flei)\n- [ATC'24] [Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxia)\n- [TPDS'24] [ElasticBatch: A Learning-Augmented Elastic Scheduling System for Batch Inference on MIG](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10605084)\n- [Survey :mag:] [arxiv'24] [LLM 
Inference Serving: Survey of Recent Advances and Opportunities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.12391)\n- [arxiv'24] [Metron: Holistic Performance Evaluation Framework for LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.07000)\n- [arxiv'24] [Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2407.00066)\n- [arxiv'24] [One Queue Is All You Need: Resolving Head-of-Line Blocking in Large Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00047)\n- [OSDI'24] [Parrot: Efficient Serving of LLM-based Applications with Semantic Variable](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19888)\n- [arxiv'24] [MemServe: Context Caching for Disaggregated LLM Serving with Elastic Memory Pool](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17565)\n- [ISCA'24] [ElasticRec: A Microservice-based Model Serving Architecture Enabling Elastic Resource Scaling for Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06955v1)\n- [ISCA'24] [Splitwise: Efficient generative LLM inference using phase splitting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18677)\n- [ICML'24] [Break the Sequential Dependency of LLM Inference Using Lookahead Decoding](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Ffu24a.html)\n- [ICML'24] [Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads](https:\u002F\u002Fopenreview.net\u002Fforum?id=PEpbUobfJv)\n- [ICML'24] [HexGen: Generative Inference of Large Language Model over Heterogeneous Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11514)\n- [ICML'24] [EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty](https:\u002F\u002Fopenreview.net\u002Fforum?id=1NdN7eXyb4)\n- [ICML'24] [MuxServe: Flexible Spatial-Temporal Multiplexing for Multiple LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02015)\n- [HPCA'24] [An LPDDR-based CXL-PNM Platform for TCO-efficient Inference of Transformer-based Large Language Models](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10476443)\n- [MobiSys'24] [ARISE: High-Capacity AR Offloading Inference Serving via Proactive Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3643832.3661894)\n- [MobiSys'24] [Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3643832.3661878)\n- [arxiv'24] [Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18628)\n- [arxiv'24] [HawkVision: Low-Latency Modeless Edge AI Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19213)\n- [MLSys'24] [HeteGen: Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F5431dca75a8d2abc1fb51e89e8324f10-Paper-Conference.pdf)\n- [MLSys'24] [S-LoRA: Serving Thousands of Concurrent LoRA Adapters](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F906419cd502575b617cc489a1a696a67-Paper-Conference.pdf)\n- [MLSys'24] [Vidur: A Large-Scale Simulation Framework For LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05465)\n- [arxiv'24] [The CAP Principle for LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11299)\n- [WWW'24] [λGrapher: A Resource-Efficient Serverless System for GNN Serving through Graph 
Sharing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3589334.3645383)\n- [ICML'24] [CLLMs: Consistency Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00835)\n- [arxiv'24] [BlockLLM: Multi-tenant Finer-grained Serving for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.18322)\n- [EuroSys'24] [Model Selection for Latency-Critical Inference Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629565)\n- [arxiv'24] [Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.14527)\n- [arxiv'24] [Learn To be Efficient: Build Structured Sparsity in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06126)\n- [arxiv'24] [Sponge: Inference Serving with Dynamic SLOs Using In-Place Vertical Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00704v1)\n- [ISCA'24] [Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12066)\n- [arxiv'24] [Minions: Accelerating Large Language Model Inference with Adaptive and Collective Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15678)\n- [arxiv'24] [ALTO: An Efficient Network Orchestrator for Compound AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04311)\n- [ASPLOS'24] [ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640383)\n- [ASPLOS'24] [NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00579)\n- [arxiv'24] [ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2403.02352)\n- [arxiv'24] [Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02310)\n- [ICML'24] [DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01876)\n- [ICLR'24] [Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01801)\n- [arxiv'24] [FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18789)\n- [arxiv'24] [Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14035)\n- [arxiv'24] [RelayAttention for Efficient Large Language Model Serving with Long System Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14808)\n- [arxiv'24] [LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01136)\n  - [PPoPP'24 poster] [POSTER: LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627535.3638480)\n- [NSDI'24] Approximate Caching for Efficiently Serving Diffusion Models\n- [arxiv'24] [APIServe: Efficient API Support for Large-Language Model Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01869)\n- [arxiv'24] [ServerlessLLM: Locality-Enhanced Serverless Inference for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14351)\n- [arxiv'24] [MoE-Infinity: 
Activation-Aware Expert Offloading for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14361)\n- [arxiv'24] [FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14112)\n- [arxiv'24] [Accelerating Retrieval-Augmented Language Model Serving with Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14021)\n- [arxiv'24] [CaraServe: CPU-Assisted and Rank-Aware LoRA Serving for Generative LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11240)\n- [arxiv'24] [Inference without Interference: Disaggregate LLM Inference for Mixed Downstream Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11181)\n- [arxiv'24] [DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08671)\n- [Survey :mag:] [arxiv'24] [Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.07851)\n- [arxiv'24] [Learned Best-Effort LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.07886)\n- [arxiv'24] [Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02669)\n- [VLDB'24] [Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp211-xia.pdf)\n- [ASPLOS'24] [SpotServe: Serving Generative Large Language Models on Preemptible Instances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15566)\n- [ASPLOS'24] [SpecInfer: Accelerating Generative Large Language Model Serving with Speculative Inference and Token Tree Verification](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651335)\n- [arxiv'23] [DeltaZip: Multi-Tenant Language Model Serving via Delta Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05215)\n- [EMNLP'23] [Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding](https:\u002F\u002Faclanthology.org\u002F2023.emnlp-main.362\u002F)\n- [arxiv'23] [Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.08168)\n- [arxiv'23] [Fairness in Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00588)\n- [arxiv'23] [Moirai: Towards Optimal Placement for Distributed Inference on Heterogeneous Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04025)\n- [arxiv'23] [Punica: Multi-Tenant LoRA Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18547)\n- [arxiv'23] [Pipeline Parallelism for DNN Inference with Practical Performance Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03703)\n- [arxiv'23] [SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.16369)\n- [arxiv'23] High-throughput Generative Inference of Large Language Models with a Single GPU\n- [NeurIPS'23] [SpecTr: Fast Speculative Decoding via Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=SdYHLTCC5J)\n- [HPDC'23] Kairos: Building Cost-Efficient Machine Learning Inference Systems with Heterogeneous Cloud Resources\n- [SOSP'23] Paella: Low-latency Model Serving with Virtualized GPU Scheduling\n- [SOSP'23] [Efficient Memory Management for 
Large Language Model Serving with PagedAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06180)\n- [MLSys'23] [Efficiently Scaling Transformer Inference](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F523f87e9d08e6071a3bbd150e6da40fb-Abstract-mlsys2023.html)\n- [EuroSys'23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access\n- [EuroSys'23] Tabi: An Efficient Multi-Level Inference System for Large Language Models\n- [EuroSys'23] Pocket: ML Serving from the Edge\n- [OSDI'23] AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving\n- [NSDI'23] SHEPHERD: Serving DNNs in the Wild\n- [VLDB'23] Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures\n- [ICML'23] [Fast Inference from Transformers via Speculative Decoding](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fleviathan23a.html)\n- [SIGMOD'22] Serverless Data Science - Are We There Yet? A Case Study of Model Serving\n- [OSDI'22] Orca: A Distributed Serving System for Transformer-Based Generative Models\n- [OSDI'22] Microsecond-scale Preemption for Concurrent GPU-accelerated DNN Inferences\n- [ATC'22] [SOTER: Guarding Black-box Inference for General Neural Networks at the Edge](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc22\u002Fpresentation\u002Fshen)\n- [ATC'22] Serving Heterogeneous Machine Learning Models on Multi-GPU Servers with Spatio-Temporal Sharing\n- [ATC'22] Tetris: Memory-efficient Serverless Inference through Tensor Sharing\n- [ATC'22] PetS: A Unified Framework for Parameter-Efficient Transformers Serving\n- [ATC'21] INFaaS: Automated Model-less Inference Serving\n- [SoCC'21] Morphling: Fast, Near-Optimal Auto-Configuration for Cloud-Native Model Serving\n- [arxiv'21] Supporting Massive DLRM Inference through Software Defined Memory\n- [MobiCom'20] SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud\n\n## Attention Optimization\n- [PPOPP'26] [MetaAttention: A Unified and Performant Attention Framework across Hardware Backends](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786444)\n- [PPoPP'26] [FlashAttention-T: Towards Fully Tensorized Attention by Exploiting Tensor-Vector Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786425)\n- [arxiv'25] [BLASST: Dynamic BLocked Attention Sparsity via Softmax Thresholding](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2512.12087)\n- [SC'25] [UltraAttn: Efficiently Parallelizing Attention through Hierarchical Context-Tiling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [SC'25] [RingX: Scalable Parallel Attention for Long-Context Learning on HPC](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759859)\n- [NeurIPS'25] [Twilight: Adaptive Attention Sparsity with Hierarchical Top-p Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02770)\n- [NeurIPS'25 Spotlight] [SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11594) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [arxiv'25] [SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24006) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSLA)]\n- [MLSys'25] [FastTree: Optimizing Attention Kernel and Runtime for Tree-Structured 
LLM Inference](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3278)\n- [MLSys'25] [FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [NeurIPS'24] [FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F7ede97c3e082c6df10a8d6103a2eebd2-Abstract-Conference.html)\n- [ICLR'24] [FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning](https:\u002F\u002Fopenreview.net\u002Fforum?id=mZn2Xyh9Ec)\n- [NeurIPS'22] [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html)\n\n## Mixture of Experts (MoE)\n- [arxiv'26] [PROBE: Co-Balancing Computation and Communication in MoE Inference via Real-Time Predictive Prefetching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00509)\n- [arxiv'26] [Dynamic Expert Sharing: Decoupling Memory from Parallelism in Mixture-of-Experts Diffusion LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00879)\n- [arxiv'26] [LatentMoE: Toward Optimal Accuracy per FLOP and Parameter in Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.18089)\n- [arxiv'26] [Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17111)\n- [arxiv'26] [MixServe: An Automatic Distributed Serving System for MoE Models with Hybrid Parallelism Based on Fused Communication Algorithm](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.08800)\n- [arxiv'26] [MoE-DisCo:Low Economy Cost Training Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06857)\n- [arxiv'26] [MoEBlaze: Breaking the Memory Wall for Efficient MoE Training on Modern GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05296)\n- [arxiv'26] [Making MoE-based LLM Inference Resilient with Tarragon](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01310)\n- [EuroSys'26] Taming Latency-Memory Trade-Off in MoE-Based LLM Serving via Fine-Grained Expert Offloading\n- [EuroSys'26] [MegaScale-MoE: Large-Scale Communication-Efficient Training of Mixture-of-Experts Models in Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11432)\n\n- [PACT'25] [ScaleMoE: A Fast and Scalable Distributed Training Framework for Large-Scale Mixture-of-Experts Models](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F11282919)\n- [arxiv'25] [FUSCO: High-Performance Distributed Data Shuffling via Transformation-Communication Fusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.22036)\n- [arxiv'25] [Efficient MoE Inference with Fine-Grained Scheduling of Disaggregated Expert Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.21487)\n- [arxiv'25] [UCCL-EP: Portable Expert-Parallel Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19849)\n- [arxiv'25] [Remoe: Towards Efficient and Low-Cost MoE Inference in Serverless Computing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18674)\n- [arxiv'25] [Efficient Mixture-of-Agents Serving via Tree-Structured Routing, Adaptive Pruning, and Dependency-Aware Prefill-Decode Overlap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18126)\n- [arxiv'25] [SonicMoE: Accelerating MoE with IO and Tile-aware 
Optimizations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14080)\n- [arxiv'25] [Janus: Disaggregating Attention and Experts for Scalable MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13525)\n- [arxiv'25] [Efficient MoE Serving in the Memory-Bound Regime: Balance Activated Experts, Not Tokens](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.09277)\n- [arxiv'25] [Context-Aware Mixture-of-Experts Inference on CXL-Enabled GPU-NDP Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04476)\n- [arxiv'25] [MicroMoE: Fine-Grained Load Balancing for Mixture-of-Experts with Token Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16947)\n- [arxiv'25] [MoDES: Accelerating Mixture-of-Experts Multimodal Large Language Models via Dynamic Expert Skipping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15690)\n- [arxiv'25] [MoE-SpeQ: Speculative Quantized Decoding with Proactive Expert Prefetching and Offloading for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14102)\n- [arxiv'25] [Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10676)\n- [arxiv'25] [FarSkip-Collective: Unhobbling Blocking Communication in Mixture of Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11505)\n- [arxiv'25] [DualSparse-MoE: Coordinating Tensor\u002FNeuron-Level Sparsity with Expert Partition and Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18376)\n- [arxiv'25] [BuddyMoE: Exploiting Expert Redundancy to Accelerate Memory-Constrained Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10054)\n- [SC'25] [Diff-MoE: Efficient Batched MoE Inference with Priority-Driven Differential Expert Caching](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759903)\n- [arxiv'25] [PuzzleMoE: Efficient Compression of Large Mixture-of-Experts Models via Sparse Expert Merging and Bit-packed inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.04805)\n- [SC workshop'25] [Compression Error Sensitivity Analysis for Different Experts in MoE Model Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767377)\n- [SC workshop'25] [Batch Tiling on Attention: Efficient Mixture of Experts Training on Wafer-Scale Processors](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767407)\n- [arxiv'25] [Opportunistic Expert Activation: Batch-Aware Expert Routing for Faster Decode Without Retraining](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02237)\n- [arxiv'25] [HybridEP: Scaling Expert Parallelism to Cross-Datacenter Scenario via Hybrid Expert\u002FData Transmission](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19470)\n- [arxiv'25] [MoE-Prism: Disentangling Monolithic Experts for Elastic MoE Services via Model-System Co-Designs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19366)\n- [arxiv'25] [ReXMoE: Reusing Experts with Minimal Overhead in Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.17483)\n- [arxiv'25] [MergeMoE: Efficient Compression of MoE Models via Expert Output Merging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14436)\n- [MICRO'25] [Optimizing All-to-All Collective Communication with Fault Tolerance on Torus Networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [arxiv'25] [GatePro: Parameter-Free Expert Selection Optimization for Mixture-of-Experts 
Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13079)\n- [MICRO'25] [Stratum: System-Hardware Co-Design with Tiered Monolithic 3D-Stackable DRAM for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05245)\n- [arxiv'25] [Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05497)\n- [arxiv'25] [ElasticMoE: An Efficient Auto Scaling Method for Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02613)\n- [SOSP'25] [KTransformers: Unleashing the Full Potential of CPU\u002FGPU Hybrid Inference for MoE Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764843)\n- [arxiv'25] [GRACE-MoE: Grouping and Replication with Locality-Aware Routing for Efficient Distributed MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25041)\n- [arxiv'25] [MoEs Are Stronger than You Think: Hyper-Parallel Inference Scaling with RoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17238)\n- [arxiv'25] [DiEP: Adaptive Mixture-of-Experts Compression through Differentiable Expert Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16105)\n- [arxiv'25] [Symphony-MoE: Harmonizing Disparate Pre-trained Models into a Coherent Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18542)\n- [NeurIPS'25] [BrainMoE: Cognition Joint Embedding via Mixture-of-Expert Towards Robust Brain Foundation Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=05cVmYJJnb)\n- [NeurIPS'25] [S’MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LbNL8xGai2)\n- [NeurIPS'25] [The Omni-Expert: A Computationally Efficient Approach to Achieve a Mixture of Experts in a Single Expert Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mVRphqQKnb)\n- [NeurIPS'25] [MoESD: Unveil Speculative Decoding's Potential for Accelerating Sparse MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19645)\n- [NeurIPS'25] [FlyLoRA: Boosting Task Decoupling and Parameter Efficiency via Implicit Rank-Wise Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08396)\n- [NeurIPS'25] [FlowMoE: A Scalable Pipeline Scheduling Framework for Distributed Mixture-of-Experts Training](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Fposter\u002F118234)\n- [NeurIPS'25] [FlashMoE: Fast Distributed MoE in a Single Kernel](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04667) [[Code](https:\u002F\u002Fgithub.com\u002Fosayamenja\u002FFlashMoE)]\n- [arxiv'25] [Steering MoE LLMs via Expert (De)Activation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.09660)\n- [arxiv'25] [HD-MoE: Hybrid and Dynamic Parallelism for Mixture-of-Expert LLMs with 3D Near-Memory Processing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.09420)\n- [arxiv'25] [LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02753)\n- [SC'25] [MoE-Compression: How the Compression Error of Experts Affects the Inference Accuracy of MoE Model?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07727)\n- [arxiv'25] [LongCat-Flash Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01322)\n- [arxiv'25] [Accelerating Mixture-of-Experts Inference by Hiding Offloading Latency with Speculative 
Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.21706)\n- [arxiv'25] [HAP: Hybrid Adaptive Parallelism for Efficient Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19373)\n- [arxiv'25] [MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17467)\n- [SIGCOMM'25] [MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02263)\n- [ICLR'25] [Ada-K Routing: Boosting the Efficiency of MoE-based LLMs](https:\u002F\u002Fopenreview.net\u002Fforum?id=9CqkpQExe2)\n- [arxiv'25] [Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05567)\n- [ICML'25] [Oracle-MoE: Locality-preserving Routing in the Oracle Space for Memory-constrained Large Language Model Inference](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002Fposter\u002F43606)\n- [ICML'25] [I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=EuJaF5QsMP)\n- [arxiv'25] [Chain-of-Experts: Unlocking the Communication Power of Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18945)\n- [SC'25] [X-MoE: Enabling Scalable Training for Emerging Mixture-of-Experts Architectures on HPC Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.13337)\n- [SIGCOMM'25] [MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training](https:\u002F\u002Fxcwanandy.github.io\u002Fpapers\u002F2025\u002Fmixnet-sigcomm25.pdf)\n- [ATC'25] [PopFetcher: Towards Accelerated Mixture-of-Experts Training Via Popularity Based Expert-Wise Prefetch](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fzhang-junyi)\n- [arxiv'25] [HierMoE: Accelerating MoE Training with Hierarchical Token Deduplication and Expert Swap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09591)\n- [arxiv'25] [PiKV: KV Cache Management System for Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06526)\n- [arxiv'25] [BrownoutServe: SLO-Aware Inference Serving under Bursty Workloads for MoE-based LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.17133)\n- [arxiv'25] [Towards Greater Leverage: Scaling Laws for Efficient Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.17702)\n- [ACL'25] [EAC-MoE: Expert-Selection Aware Compressor for Mixture-of-Experts Large Language Models](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.633.pdf)\n- [ACL'25] [FOLDMOE: Efficient Long Sequence MoE Training via Attention-MoE Pipelining](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.186.pdf)\n- [arxiv'25] [The New LLM Bottleneck: A Systems Perspective on Latent Attention and Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.15465)\n- [arxiv'25] [Muon is Scalable for LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16982v1)\n- [arxiv'25] [Long-Tailed Distribution-Aware Router For Mixture-of-Experts in Large Vision-Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01351)\n- [arxiv'25] [Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23266)\n- [arxiv'25] [HarMoEny: Efficient Multi-GPU Inference of MoE Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12417)\n- [arxiv'25] [Load 
Balancing Mixture of Experts with Similarity Preserving Routers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14038)\n- [arxiv'25] [MoE-GPS: Guidlines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07366)\n- [arxiv'25] [EvoMoE: Expert Evolution in Mixture of Experts for Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23830)\n- [arxiv'25] [CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17553)\n- [arxiv'25] [PreMoe: Lightening MoEs on Constrained Memory by Expert Pruning and Retrieval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17639)\n- [arxiv'25] [Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16056)\n- [arxiv'25] [Toward Cost-Efficient Serving of Mixture-of-Experts with Asynchrony](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.08944)\n- [ICML'25] [FloE: On-the-Fly MoE Inference on Memory-constrained GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05950)\n- [arxiv'25] [PT-MoE: An Efficient Finetuning Framework for Integrating Mixture-of-Experts into Prompt Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09519)\n- [arxiv'25] [MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03804)\n- [arxiv'25] [Faster MoE LLM Inference for Extremely Large Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03531)\n- [arxiv'25] [Accelerating Mixture-of-Experts Training with Adaptive Expert Replication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19925)\n- [NAACL'25] [MoLA: MoE LoRA with Layer-wise Expert Allocation](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.284\u002F)\n- [NAACL'25] [Marrying LLMs with Dynamic Forecasting: A Graph Mixture-of-expert Perspective](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.24.pdf)\n- [NAACL'25] [Sparser Mixture-of-Adapters with Cross-Layer Generalization](https:\u002F\u002Faclanthology.org\u002F2025.naacl-long.201\u002F)\n- [NAACL'25] [SimSMoE: Toward Efficient Training Mixture of Experts via Solving Representational Collapse](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.107\u002F)\n- [Mobicom'25] [D2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15299)\n- [arxiv'25] [MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2504.14960)\n- [arxiv'25] [MoE-Gen: High-Throughput MoE Inference on a Single GPU with Module-Based Batching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09716)\n- [arxiv'25] [Unveiling Hidden Collaboration within Mixture-of-Experts in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12359)\n- [arxiv'25] [Dense Backpropagation Improves Training for Sparse 
Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12463)\n- [arxiv'25] [MoE-Lens: Towards the Hardware Limit of High-Throughput MoE LLM Serving Under Resource Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09345)\n- [arxiv'25] [C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07964)\n- [arxiv'25] [Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07807)\n- [arxiv'25] [S'MoRE: Structural Mixture of Residual Experts for LLM Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06426)\n- [DAC'25] [HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05897)\n- [arxiv'25] [Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05586)\n- [arxiv'25] [HeterMoE: Efficient Training of Mixture-of-Experts Models on Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03871)\n- [TKDE'25] [A Survey on Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06204)\n- [ICLR'25] [NetMoE: Accelerating MoE Training through Dynamic Sample Placement](https:\u002F\u002Fopenreview.net\u002Fforum?id=1qP3lsatCR)\n- [arxiv'25] [ProMoE: Fast MoE-based LLM Serving using Proactive Caching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22134)\n- [arxiv'25] [Mixture of Lookup Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15798v1)\n- [EuroSys'25] [Samoyeds: Accelerating MoE Models with Structured Sparsity Leveraging Sparse Tensor Cores](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10725)\n- [EuroMLSys'25] [Priority-Aware Preemptive Scheduling for Mixed-Priority Workloads in MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09304)\n- [EuroMLSys'25] [Accelerating MoE Model Inference with Expert Sharding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08467)\n- [arxiv'25] [eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06823)\n- [KDD'25] [ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06881)\n- [arxiv'25] [Continual Pre-training of MoEs: How robust is your router?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05029)\n- [arxiv'25] [Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05139)\n- [arxiv'25] [Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05066)\n- [arxiv'25] [Speculative MoE: Communication Efficient Parallel MoE Inference with Speculative Token and Expert Pre-scheduling](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2503.04398)\n- [MLSys'25] [Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19811v3)\n- [arxiv'25] [CoSMoEs: Compact Sparse Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00245)\n- [CVPR'25] [DeRS: Towards Extremely Efficient Upcycled Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01359)\n- [ASPLOS'25] [CoServe: Efficient Collaboration-of-Experts (CoE) Model Inference with Limited 
Memory](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2503.02354)\n- [arxiv'25] [Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19811)\n- [arxiv'25] [BigMac: A Communication-Efficient Mixture-of-Experts Model Structure for Fast Training and Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16927)\n- [arxiv'25] [DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12455)\n- [arxiv'25] [Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.12947)\n- [arxiv'25] [MoETuner: Optimized Mixture of Expert Serving with Balanced Expert Placement and Token Routing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06643)\n- [arxiv'25] [Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05172)\n- [arxiv'25] [Fair-MoE: Fairness-Oriented Mixture of Experts in Vision-Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06094)\n- [arxiv'25] [fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.05370)\n- [TPDS'25] [EfficientMoE: Optimizing Mixture-of-Experts Model Training with Adaptive Load Balance](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftd\u002F5555\u002F01\u002F10876795\u002F247s0GLFJN6)\n- [arxiv'25] [Hecate: Unlocking Efficient Sparse Model Training via Fully Sharded Sparse Data Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02581)\n- [NAACL'25] [MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00997)\n- [arxiv'25] [BTS: Harmonizing Specialized Experts into a Generalist LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00075)\n- [ASPLOS'25] [FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.10714)\n- [arxiv'25] [Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11873)\n- [arxiv'25] [Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12370)\n- [arxiv'25] [Optimizing Distributed Deployment of Mixture-of-Experts Model Inference in Serverless Computing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05313)\n- [MICRO'24] [SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10764648)\n- [TPDS'24] [MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10494556)\n  - Journal version of [IPDPS'23] [MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [arxiv'24] [DeepSeek-V3 Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.19437)\n- [arxiv'24] [HEXA-MoE: Efficient and Heterogeneous-aware MoE Acceleration with ZERO Computation Redundancy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01288)\n- [arxiv'24] [Communication-Efficient Sparsely-Activated Model Training via Sequence Migration and Token 
Condensation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15419)\n- [arxiv'24] [Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15901)\n- [arxiv'24] [ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing](https:\u002F\u002Fopenreview.net\u002Fforum?id=4D0f16Vwc3)\n- [Survey :mag:] [arxiv'24] [A Survey on Inference Optimization Techniques for Mixture of Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14219)\n- [arxiv'24] [DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10302)\n- [arxiv'24] [Llama 3 Meets MoE: Efficient Upcycling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09952)\n- [arxiv'24] [Sparsing Law: Towards Large Language Models with Greater Activation Sparsity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02335)\n- [arxiv'24] [Mixture of A Million Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04153)\n- [arxiv'24] [MoE-CAP: Cost-Accuracy-Performance Benchmarking for Mixture-of-Experts Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07067)\n- [arxiv'24] [MoESys: A Distributed and Efficient Mixture-of-Experts Training and Inference System for Internet Services](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10034)\n- [arxiv'24] [Toward Inference-optimal Mixture-of-Expert Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02852)\n- [arxiv'24] [Expert-Token Resonance: Redefining MoE Routing through Affinity-Driven Active Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00023)\n- [MLArchSys'24 @ ISCA'24] [MoE-ERAS: Expert Residency Aware Selection](https:\u002F\u002Fopenreview.net\u002Fforum?id=o43eHjPEMO)\n- [arxiv'24] [MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04801)\n- [arxiv'24] [Prediction Is All MoE Needs: Expert Load Distribution Goes from Fluctuating to Stabilizing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16914)\n- [arxiv'24] [DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)\n- [COLM'24] [Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training](https:\u002F\u002Fopenreview.net\u002Fforum?id=LKEJPySnlt)\n- [ME-FoMo @ ICLR'24] [Scaling Laws for Fine-Grained Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07871)\n- [arxiv'24] [UOE: Unlearning One Expert Is Enough For Mixture-of-experts LLMS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.18797)\n- [ML for Sys workshop @ NeurIPS'24] [IFMoE: An Inference Framework Design for Fine-grained MoE](https:\u002F\u002Fmlforsystems.org\u002Fassets\u002Fpapers\u002Fneurips2024\u002Fpaper41.pdf)\n- [ML for Sys workshop @ NeurIPS'24] [TurboMoE: Enhancing MoE Model Training with Smart Kernel-Fusion and Data Transformation](https:\u002F\u002Fopenreview.net\u002Fforum?id=huy8g3iKy0)\n- [arxiv'24] [Dense Backpropagation Improves Routing for Sparsely-Gated Mixture-of-Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=huy8g3iKy0)\n- [arxiv'24] [MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11217)\n- [arxiv'24] [Pro-Prophet: Systematic Load Balancing Method for Efficient Parallel Training of Large-scale MoE Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10003)\n- [EMNLP'24] 
[MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18035)\n- [EMNLP'24] [Mixture of Diverse Size Experts](https:\u002F\u002Faclanthology.org\u002F2024.emnlp-industry.118\u002F)\n- [EMNLP'24] [AdaMOE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models](https:\u002F\u002Faclanthology.org\u002F2024.findings-emnlp.361.pdf)\n- [ACL'24] [Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models](https:\u002F\u002Faclanthology.org\u002F2024.acl-long.334\u002F)\n- [ACL'24] [SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15030)\n- [SoCC'24] [MoEsaic: Shared Mixture of Experts](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3698038.3698521)\n- [KDD'24] [Efficient Mixture of Experts based on Large Language Models for Low-Resource Data Preprocessing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3637528.3671873)\n- [arxiv'24] [Pipeline MoE: A Flexible MoE Implementation with Pipeline Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11414)\n- [IPDPS'24] [Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08383)\n- [arxiv'24] [EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12247)\n- [arxiv'24] [Shortcut-connected Expert Parallelism for Accelerating Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05019)\n- [NeurIPS'24] [Toward Efficient Inference for Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=stXtBqyTWX)\n- [arxiv'24] [Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08982)\n- [MLSys'24] [SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F698cfaf72a208aef2e78bcac55b74328-Abstract-Conference.html)\n- [SC'24] [APTMoE: Affinity-Aware Pipeline Tuning for MoE Models on Bandwidth-Constrained GPU Nodes](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100b436\u002F21HUWvO6IIo)\n- [NeurIPS'24] [GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=QtYg4g3Deu)\n- [arxiv'24] [HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01433)\n- [arxiv'24] [Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02265)\n- [NeurIPS'24] [LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing](https:\u002F\u002Fopenreview.net\u002Fforum?id=bjFhVbky5A)\n- [arxiv'24] [Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11273)\n- [NeurIPS'24] [Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design](https:\u002F\u002Fopenreview.net\u002Fforum?id=i8JaxY7tDI)\n- [arxiv'24] 
[ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17954v1)\n- [arxiv'24] [Demystifying the Compression of Mixture-of-Experts Through a Unified Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02500)\n- [PML4LRS @ ICLR'24] [Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07033)\n- [arxiv'24] [Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17043)\n- [arxiv'24] [MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12013)\n- [arxiv'24] [Duo-LLM: A Framework for Studying Adaptive Computation in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10846)\n- [arxiv'24] [MoH: Multi-Head Attention as Mixture-of-Head Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11842v1)\n- [arxiv'24] [AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10896v1)\n- [NeurIPS'24 (Spotlight)] [Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08245v1)\n- [arxiv'24] [Aria: An Open Multimodal Native Mixture-of-Experts Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05993)\n- [arxiv'24] [MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06270)\n- [arxiv'24] [MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07348)\n- [arxiv'24] [Upcycling Large Language Models into Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07524)\n- [arxiv'24] [No Need to Talk: Asynchronous Mixture of Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03529)\n- [arxiv'24] [Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04656)\n- [arxiv'24] [HMoE: Heterogeneous Mixture of Experts for Language Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10681)\n- [arxiv'24] [FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11304v1)\n- [arxiv'24] [AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.06567)\n- [arxiv'24] [Layerwise Recurrent Router for Mixture-of-Experts](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.06793)\n- [arxiv'24] [Partial Experts Checkpoint: Efficient Fault Tolerance for Sparse Mixture-of-Experts Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04307)\n- [SRW @ ACL'24] [MoExtend: Tuning New Experts for Modality and Task Extension](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.03511v1)\n- [arxiv'24] [MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01505)\n- [arxiv'24] [Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00945)\n- [arxiv'24] [Self-MoE: Towards Compositional Large Language Models with Self-Specialized 
Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.12034)\n- [arxiv'24] [Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06563)\n- [ICML'24] [Scaling Laws for Fine-Grained Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yoqdlynCRs)\n- [ICML'24] [Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=uLpyWQPyF9)\n- [MLSys'24] [QMoE: Sub-1-Bit Compression of Trillion-Parameter Models](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002Fc74b624843218d9b6713fcf299d6d5e4-Paper-Conference.pdf)\n- [MLSys'24] [Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [arxiv'24] [CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05949)\n- [arxiv'24] [AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00361)\n- [SIGIR'24] [M3oE: Multi-Domain Multi-Task Mixture-of Experts Recommendation Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.18465)\n- [EuroSys'24] [ScheMoE: An Extensible Mixture-of-Experts Distributed Training System with Tasks Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650083)\n- [arxiv'24] [MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA based Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.15159)\n- [ICLR'24] [Mixture of LoRA Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=uWvKBCYh4S)\n- [arxiv'24] [Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07816)\n- [arxiv'24] [MoE-Infinity: Activation-Aware Expert Offloading for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14361)\n- [IJCAI'24] [LocMoE: A Low-overhead MoE for Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.13920)\n- [ISCA'24] [Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12066)\n- [IPDPS'23] [MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [EMNLP'23] [Adaptive Gating in Mixture-of-Experts based Language Models](https:\u002F\u002Faclanthology.org\u002F2023.emnlp-main.217\u002F)\n- [ACL'23] [AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.580\u002F)\n- [ICLR'23] [Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5nUQDrM4u)\n- [ICML'23] [Brainformers: Trading Simplicity for Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00008)\n- [arxiv'23] [Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06182)\n- [arxiv'23] [Fast Inference of Mixture-of-Experts Language Models with Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17238)\n- [ATC'23] [Accelerating 
Distributed MoE Training and Inference with Lina](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fli-jiamin)\n- [ATC'23] [SmartMoE: Efficiently Training Sparsely-Activated Models through Combining Offline and Online Parallelization](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fzhai)\n- [OSDI'23] [Optimizing Dynamic Neural Networks with Brainstorm](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fcui)\n- [SIGMOD'23] FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement\n- [ICS'23] A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training\n- [MLSys'23] [MegaBlocks: Efficient Sparse Training with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F5a54f79333768effe7e8927bcccffe40-Abstract-mlsys2023.html)\n- [MLSys'23] [Tutel: Adaptive Mixture-of-Experts at Scale](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F9412531719be7ccf755c4ff98d0969dc-Abstract-mlsys2023.html)\n- [arxiv'22] [ST-MoE: Designing Stable and Transferable Sparse Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08906)\n- [PPoPP'22] [FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503221.3508418)\n- [SustaiNLP @ EMNLP'22] [Who Says Elephants Can't Run: Bringing Large Scale MoE Models into Cloud Scale Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10017)\n- [NeurIPS'22] [Mixture-of-Experts with Expert Choice Routing](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F2f00ecd787b432c1d36f3de9800728eb-Abstract-Conference.html)\n- [ICML'22] [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05596)\n- [ICML'22] [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdu22c\u002Fdu22c.pdf)\n- [JMLR'22] [Switch transformers: scaling to trillion parameter models with simple and efficient sparsity](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3586589.3586709)\n- [EMNLP'21] [Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference](https:\u002F\u002Faclanthology.org\u002F2021.findings-emnlp.304\u002F)\n- [ICLR'17] [Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer](https:\u002F\u002Fopenreview.net\u002Fforum?id=B1ckMDqlg)\n\n## Communication Optimization & Network Infrastructure for Distributed ML\n- [arxiv'26] [HetCCL: Accelerating LLM Training with Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22585)\n- [PPoPP'26] [COCCL: A Collective Communication Library Supporting Easy Integration and Configuration of Customized Compression for Scalable LLM Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786432)\n- [arxiv'26] [AutoOverlap: Enabling Fine-Grained Overlap of Computation and Communication with Chunk-Based Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20595)\n- [arxiv'26] [Heterogeneous Low-Bandwidth Pre-Training of LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.02360)\n- [EuroSys'26] [Efficient and Adaptable Overlapping for Computation and Communication via Signaling and 
Reordering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [Analyzing Communication Predictability in LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24750)\n- [arxiv'25] [UCCL-EP: Portable Expert-Parallel Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19849)\n- [arxiv'25] [Design Space Exploration of DMA based Finer-Grain Compute Communication Overlap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10236)\n- [arxiv'25] [Training Foundation Models on a Full-Stack AMD Platform: Compute, Networking, and System Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17127)\n- [arxiv'25] [FarSkip-Collective: Unhobbling Blocking Communication in Mixture of Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11505)\n- [arxiv'25] [GPU-Initiated Networking for NCCL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15076)\n- [SC workshop'25] [Redesigning GROMACS Halo Exchange: Improving Strong Scaling with GPU-initiated NVSHMEM](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731599.3767508)\n- [SC'25] [Understanding Communication Bottlenecks in Multi-node LLM Inference](https:\u002F\u002Fsc25.supercomputing.org\u002Fproceedings\u002Fposters\u002Fposter_files\u002Fpost253s2-file3.pdf)\n- [SC'25] [CPU- and GPU-initiated Communication Strategies for Conjugate Gradient Methods on Large GPU Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3712285.3759774)\n- [SC'25] [SDR-RDMA: Software-Defined Reliability Architecture for Planetary Scale RDMA Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05366)\n- [HotNets'25] [Photonic Rails in ML Datacenters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.08119)\n- [arxiv'25] [DMA Collectives for Efficient ML Communication Offloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06605)\n- [arxiv'25] [Collective Communication for 100k+ GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20171)\n- [arxiv'25] [Uno: A One-Stop Solution for Inter- and Intra-Datacenter Congestion Control and Reliable Connectivity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15802)\n- [SOSP'25] [Mycroft: Tracing Dependencies in Collective Communication Towards Reliable LLM Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764848)\n- [MICRO'25] [SuperMesh: Energy-Efficient Collective Communications for Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756085)\n- [MICRO'25] [SkipReduce: (Interconnection) Network Sparsity to Accelerate Distributed Machine Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756092)\n- [MICRO'25] [Optimizing All-to-All Collective Communication with Fault Tolerance on Torus Networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [arxiv'25] [MSCCL++: Rethinking GPU Communication Abstractions for Cutting-edge AI Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09014)\n- [arxiv'25] [Toward Co-adapting Machine Learning Job Shape and Cluster Topology](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03891)\n- [APNET'25] [Rethinking Dynamic Networks and Heterogeneous Computing with Automatic Parallelization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3735358.3735382)\n- [arxiv'25] [Efficient AllReduce with Stragglers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23523)\n- [arxiv'25] [TASP: Topology-aware Sequence Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26541)\n- [NAIC @ 
SIGCOMM'25] [Chronos: Prescheduled circuit switching for LLM training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3748273.3749210)\n- [arxiv'25] [Bine Trees: Enhancing Collective Operations by Optimizing Communication Locality](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17311)\n- [SIGCOMM'25] [Falcon: A Reliable, Low Latency Hardware Transport](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3754353)\n- [SIGCOMM'25] [ByteScale: Communication-Efficient Scaling of LLM Training with a 2048K Context Length on 16384 GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3754352)\n- [SIGCOMM'25] [From ATOP to ZCube: Automated Topology Optimization Pipeline and A Highly Cost-Effective Network Topology for Large Model Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750503)\n- [SIGCOMM'25] [Astral: A Datacenter Infrastructure for Large Language Model Training at Scale](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750521)\n- [SIGCOMM'25] [ResCCL: Resource-Efficient Scheduling for Collective Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09591)\n- [OSDI'25] [ZEN: Empowering Distributed Training with Sparsity-driven Data Synchronization](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwang-zhuang)\n- [OSDI'25] [Enabling Efficient GPU Communication over Multiple NICs with FuseLink](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fren)\n- [arxiv'25] [RoCE BALBOA: Service-enhanced Data Center RDMA for SmartNICs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.20412)\n- [arxiv'25] [RailX: A Flexible, Scalable, and Low-Cost Network Architecture for Hyper-Scale LLM Training Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.18889)\n- [arxiv'25] [Demystifying NCCL: An In-depth Analysis of GPU Communication Protocols and Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.04786)\n- [APNET'25] [Congestion Control for AI Workloads with Message-Level Signaling](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3735358.3735378)\n- [ASPLOS'25] [Concerto: Automatic Communication Optimization and Scheduling for Large-Scale Deep Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707223)\n- [ISCA'25] [Chimera: Communication Fusion for Hybrid Parallelism in Large Language Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731025)\n- [arxiv'25] [NoLoCo: No-all-reduce Low Communication Training Method for Large Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10911)\n- [arxiv'25] [TokenWeave: Efficient Compute-Communication Overlap for Distributed LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11329)\n- [arxiv'25] [FLASH: Fast All-to-All Communication in GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09764)\n- [arxiv'25] [MCMComm: Hardware-Software Co-Optimization for End-to-End Communication in Multi-Chip-Modules](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00041)\n- [arxiv'25] [GenTorrent: Scaling Large Language Model Serving with An Overley Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20101)\n- [arxiv'25] [Triton-distributed: Programming Overlapping Kernels on Distributed AI Systems with the Triton Compiler](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19442)\n- [arxiv'25] [FlashOverlap: A Lightweight Design for Efficiently Overlapping Communication and 
Computation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [An Extensible Software Transport Layer for GPU Networking](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17307) (`UCCL`) [[Code](https:\u002F\u002Fgithub.com\u002Fuccl-project\u002Fuccl)]\n- [HPCA'25] [Enhancing Large-Scale AI Training Efficiency: The C4 Solution for Real-Time Anomaly Detection and Communication Optimization](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fhpca\u002F2025\u002F064700b246\u002F25Ko2hVHEEo)\n- [arxiv'25] [HeteroPod: XPU-Accelerated Infrastructure Offloading for Commodity Cloud-Native Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23952)\n- [Survey :mag:] [arxiv'25] [GPU-centric Communication Schemes for HPC and ML Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.24230v1)\n- [EuroMLSys'25] [TAGC: Optimizing Gradient Communication in Distributed Transformer Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3721146.3721946)\n- [arxiv'25] [UB-Mesh: a Hierarchically Localized nD-FullMesh Datacenter Network Architecture](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20377)\n- [MLSys'25] [TileLink: Generating Efficient Compute-Communication Overlapping Kernels using Tile-Centric Primitives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20313)\n- [arxiv'25] [Communication-Efficient Language Model Training Scales Reliably and Robustly: Scaling Laws for DiLoCo](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09799)\n- [NSDI'25] [AutoCCL: Automated Collective Communication Tuning for Accelerating Distributed and Parallel DNN Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fxu-guanbin)\n- [NSDI'25] [Efficient Direct-Connect Topologies for Collective Communications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.03356)\n- [arxiv'25] [InfinitePOD: Building Datacenter-Scale High-Bandwidth Domain for LLM with Optical Circuit Switching Transceivers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03885)\n- [IEEE MICRO'25] [Understanding and Characterizing Communication Characteristics for Distributed Transformer Models](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fmagazine\u002Fmi\u002F5555\u002F01\u002F10849609\u002F23IcYe8Lr5m)\n- [arxiv'25] [In-Network Preprocessing of Recommender Systems on Multi-Tenant SmartNICs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12032)\n- [arxiv'25] [Scaling Large Language Model Training on Frontier with Low-Bandwidth Partitioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04266)\n- [arxiv'25] [The Power of Negative Zero: Datatype Customization for Quantized Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04052)\n- [arxiv'25] [mFabric: An Efficient and Scalable Fabric for Mixture-of-Experts Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.03905)\n- [NSDI'25] [OptiReduce: Resilient and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06993)\n- [APNET'24] [Understanding Communication Characteristics of Distributed Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3663408.3663409)\n- [arxiv'24] [TokenRing: An Efficient Parallelism Framework for Infinite-Context LLMs via Bidirectional Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20501)\n- [arxiv'24] [The Landscape of GPU-Centric Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09874v2)\n- [arxiv'24] [Revisiting the Time Cost Model of 
AllReduce](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04202)\n- [arxiv'24] [LuWu: An End-to-End In-Network Out-of-Core Optimizer for 100B-Scale Model-in-Network Data-Parallel Training on Distributed GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00918)\n- [HotInfra'24] [Immediate Communication for Distributed AI Tasks](https:\u002F\u002Fhotinfra24.github.io\u002Fpapers\u002Fhotinfra24-final2.pdf)\n- [NeurIPS'24] [SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=PEEqnXlSCk)\n- [SC'24] [Optimizing Distributed ML Communication with Fused Computation-Collective Operations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.06942)\n- [SC'24] Network-Offloaded Bandwidth-Optimal Broadcast and Allgather for Distributed AI\n- [NeurIPS'24] [LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing](https:\u002F\u002Fopenreview.net\u002Fforum?id=bjFhVbky5A)\n- [arxiv'24] [LumosCore: Highly Scalable LLM Clusters with Optical Interconnect](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01503)\n- [TPDS'24] [AutoDDL: Automatic Distributed Deep Learning With Near-Optimal Bandwidth Cost](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06813)\n- [HOTI'24] [Unified Collective Communication (UCC): An Unified Library for CPU, GPU, and DPU Collectives](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10664373)\n- [HOTI'24] [Rail-only: A Low-Cost High-Performance Network for Training LLMs with Trillion Parameters](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10664412)\n- [SC'24] [Switch-Less Dragonfly on Wafers: A Scalable Interconnection Architecture based on Wafer-Scale Integration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10290)\n- [HPDC'24] [Near-Optimal Wafer-Scale Reduce](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658693)\n- [HPDC'24] [Efficient all-to-all Collective Communication Schedules for Direct-connect Topologies](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658656)\n- [arxiv'24] [HiCCL: A Hierarchical Collective Communication Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.05962)\n- [ICS'24] [gZCCL: Compression-Accelerated Collective Communication Framework for GPU Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3650200.3656636)\n- [ICS'24] [Snoopie: A Multi-GPU Communication Profiler and Visualizer](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3650200.3656597)\n- [arxiv'24] [CSPS: A Communication-Efficient Sequence-Parallelism based Serving System for Transformer based Models with Long Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [arxiv'24] [Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15241)\n- [arxiv'24] [Exploring GPU-to-GPU Communication: Insights into Supercomputer Interconnects](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14090v1)\n- [arxiv'24] [Demystifying the Communication Characteristics for Distributed Transformer Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10197)\n- [ICPP'24] [Sparse Gradient Communication with AlltoAll for Accelerating Distributed Deep Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673140)\n- [NAIC @ SIGCOMM'24] [Proof-of-Concept of a Flexible and High-Fidelity Approach to Distributed DNN Training 
Emulation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3672198.3673793)\n- [NAIC @ SIGCOMM'24] [Eloquent: A More Robust Transmission Scheme for LLM Token Streaming](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3672198.3673797)\n- [NAIC @ SIGCOMM'24] [OmNICCL: Zero-cost Sparse AllReduce with Direct Cache Access and SmartNICs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3672198.3673804)\n- [HotNets'24] [I've Got 99 Problems But FLOPS Ain't One](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696348.3696893)\n- [HotNets'24] [MLTCP: A Distributed Technique to Approximate Centralized Flow Scheduling For Machine Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696348.3696878)\n- [HotNets'22] [Congestion Control in Machine Learning Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3563766.3564115)\n- [SIGCOMM'24] [Rethinking Machine Learning Collective Communication as a Multi-Commodity Flow Problem](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3651890.3672249)\n- [SIGCOMM'24] [RDMA over Ethernet for Distributed Training at Meta Scale](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672233)\n- [SIGCOMM'24] [Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672228)\n- [SIGCOMM'24] [MCCS: A Service-based Approach to Collective Communication for Multi-Tenant Cloud](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672252)\n- [SIGCOMM'24] [Crux: GPU-Efficient Communication Scheduling for Deep Learning Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672239)\n- [arxiv'24] [MLTCP: Congestion Control for DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09589)\n  - [HotNets'24] [MLTCP: A Distributed Technique to Approximate Centralized Flow Scheduling For Machine Learning](https:\u002F\u002Fpeople.csail.mit.edu\u002Fghobadi\u002Fpapers\u002Fmltcp_hotnets_2024.pdf)\n- [arxiv'24] [ForestColl: Efficient Collective Communications on Heterogeneous Network Fabrics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06787)\n- [APNet'24] [Understanding Communication Characteristics of Distributed Training](https:\u002F\u002Fpeople.csail.mit.edu\u002Fzhizhenzhong\u002Fpapers\u002F2024_APNET.pdf)\n- [ICLR'24] [ZeRO++: Extremely Efficient Collective Communication for Large Model Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=gx2BT0a9MQ)\n- [ICLR'24] CO2: Efficient Distributed Training with Full Communication-Computation Overlap\n  - [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16265)] [[openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZO5cn4IfaN)]\n- [MLSys'24] [L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F9069a8976ff06f6443e7f4172990a580-Paper-Conference.pdf)\n- [MLSys'24] [Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [ASPLOS'24] [T3: Transparent Tracking & Triggering for Fine-grained Overlap of Compute & Collectives](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3620665.3640410)\n- [ASPLOS'24] [TCCL: Discovering Better 
Communication Paths for PCIe GPU Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651362)\n- [ASPLOS'24] [Centauri: Enabling Efficient Scheduling for Communication-Computation Overlap in Large Model Training via Communication Partitioning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651379)\n- [ASPLOS'24] [Two-Face: Combining Collective and One-Sided Communication for Efficient Distributed SpMM](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3620665.3640427)\n- [NSDI'24] [THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fli-minghao)\n- [Survey :mag:] [arxiv'23] [Communication-Efficient Distributed Deep Learning: A Comprehensive Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06307)\n- [arxiv'23] [Optimized Network Architectures for Large Language Model Training with Billions of Parameters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12169)\n- [arxiv'23] [FlexShard: Flexible Sharding for Industry-Scale Sequence Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02959)\n- [arxiv'23] [Rethinking Memory and Communication Cost for Efficient Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06003)\n- [arxiv'23] [Zen: Near-Optimal Sparse Tensor Synchronization for Distributed DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13254)\n- [arxiv'23] [TACOS: Topology-Aware Collective Algorithm Synthesizer for Distributed Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05301)\n- [INFOCOM'23] [Libra: Contention-Aware GPU Thread Allocation for Data Parallel Training in High Speed Networks](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10228922)\n- [ICDCS'23] [bbTopk: Bandwidth-Aware Sparse Allreduce with Blocked Sparsification for Efficient Distributed Training](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10272502)\n- [ICML'23] [CocktailSGD: Fine-tuning Foundation Models over 500Mbps Networks](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fwang23t\u002Fwang23t.pdf)\n  - Related to DT-FM (NeurIPS'22)\n- [IPDPS'23] [MCR-DL: Mix-and-Match Communication Runtime for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08374)\n- [ASPLOS'23] [MSCCLang: Microsoft Collective Communication Language](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3575693.3575724)\n- [ASPLOS'23] [Overlap Communication with Dependent Computation via Decomposition in Large Deep Learning Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3567955.3567959)\n- [EuroSys'23] A2TP: Aggregator-aware In-network Aggregation for Multi-tenant Learning\n- [MLSys'23] Cupcake: A Compression Optimizer for Scalable Communication-Efficient Distributed Training\n- [MLSys'23] On Optimizing the Communication of Model Parallelism\n- [NSDI'23] TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Training Jobs\n- [NSDI'23] Better Together: Jointly Optimizing ML Collective Scheduling and Execution Planning using SYNDICATE\n- [NSDI'23] TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches\n- [NSDI'23] [ARK: GPU-driven Code Execution for Distributed Deep 
Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fhwang)\n- [EuroSys'22] [Out-of-order backprop: an effective scheduling technique for deep learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3492321.3519563)\n- [ISCA'22] Themis: a network bandwidth-aware collective scheduling policy for distributed training of DL models\n- [ISCA'22] [Software-hardware co-design for fast and scalable training of deep learning recommendation models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3470496.3533727)\n- [SC'22] HammingMesh: A Network Topology for Large-Scale Deep Learning\n- [PPoPP'22] Near-optimal sparse allreduce for distributed deep learning\n- [MLSys'22] [Synthesizing optimal parallelism placement and reduction strategies on hierarchical systems for deep learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html) (`P^2`)\n- [ASPLOS'22] [Breaking the Computation and Communication Abstraction Barrier in Distributed Machine Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.05720) (`CoCoNET`)\n- [EuroSys'21] [DGCL: an efficient communication library for distributed GNN training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3447786.3456233)\n- [ICLR'21] [Multi-Level Local SGD for Heterogeneous Hierarchical Networks](https:\u002F\u002Fopenreview.net\u002Fforum?id=C70cp4Cn32)\n- [SIGMOD'21] Heterogeneity-Aware Distributed Machine Learning Training via Partial Reduce [also in [2.5](#25-parallelism--distributed-training)]\n- [SC'21] Flare: flexible in-network allreduce\n- [NSDI'21] [Scaling Distributed Machine Learning with In-Network Aggregation](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi21\u002Fpresentation\u002Fsapio)\n- [ISCA'21] [Enabling compute-communication overlap in distributed deep learning training platforms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FISCA52012.2021.00049)\n- [PPoPP'21] [Synthesizing optimal collective algorithms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3437801.3441620) (`SCCL`)\n- [SIGCOMM'21] [SiP-ML: High-Bandwidth Optical Network Interconnects for Machine Learning Training](https:\u002F\u002Fpeople.csail.mit.edu\u002Fghobadi\u002Fpapers\u002Fsipml_sigcomm_2021.pdf)\n- [ISCA'20] [An in-network architecture for accelerating shared-memory multiprocessor collectives](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1109\u002FISCA45697.2020.00085)\n- [NeurIPS'20] [Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F5f0ad4db43d8723d18169b2e4817a160-Abstract.html)\n- [PPoPP'20] Taming unbalanced training workloads in deep learning with partial collective operations\n- [MLSys'20] [Blink: Fast and Generic Collectives for Distributed ML](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002Fcd3a9a55f7f3723133fa4a13628cdf03-Abstract.html)\n- [MLSys'20] [PLink: Discovering and Exploiting Datacenter Network Locality for Efficient Cloud-based Distributed Training](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002Feca986d585a03890a412587a2f5ccb43-Abstract.html)\n- [OSDI'20] [A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU\u002FCPU 
Clusters](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fjiang) (`BytePS`)\n- [MLSys'19] [Priority-based Parameter Propagation for Distributed DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.03960) (`P3`)\n- [MLSys'19] TicTac: Accelerating Distributed Deep Learning with Communication Scheduling\n- [SOSP'19] A generic communication scheduler for distributed DNN training acceleration (`ByteScheduler`)\n- [ATC'17] Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters\n\n## Fault tolerance & Straggler mitigation\n- [arxiv'26] [Towards Resiliency in Large Language Model Serving with KevlarFlow](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22438)\n- [arxiv'26] [Training LLMs with Fault Tolerant HSDP on 100,000 GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00277)\n- [PPoPP'26] [CCL-D: A High-Precision Diagnostic System for Slow and Hang Anomalies in Large-Scale Model Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786429)\n- [PPoPP'26] [Elastor: Elastic and Efficient Model Partitioning and Checkpointing for Fault-Tolerant Distributed Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786445)\n- [arxiv'26] [Making MoE-based LLM Inference Resilient with Tarragon](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01310)\n- [NSDI'26] [Attack of the Bubbles: Straggler-Resilient Pipeline Parallelism for Large Model Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fwu-tianyuan)\n- [arxiv'25] [TTrace: Lightweight Error Checking and Diagnosis for Distributed Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09280)\n- [arxiv'25] [Reliable and Resilient Collective Communication Library for LLM Training and Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.25059)\n- [arxiv'25] [SHIFT: An RDMA Failure-Resilient Layer for Distributed Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11094)\n- [arxiv'25] [FFTrainer: Fast Failover in Large-Language Model Training with Almost-Free State Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03644)\n- [arxiv'25] [FailSafe: High-performance Resilient Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14116)\n- [arxiv'25] [GoCkpt: Gradient-Assisted Multi-Step overlapped Checkpointing for Efficient LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07035)\n- [MICRO'25] [Optimizing All-to-All Collective Communication with Fault Tolerance on Torus Networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [APSys'25] [Indispensable CPU-centric Checkpointing for GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3725783.3764394)\n- [CLUSTER'25] [Capricorn: Efficient In-Memory Checkpointing for MoE Model Training with Dynamicity Awareness](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186488\u002F2aCq0F0jiTu)\n- [arxiv'25] [MoE-PHDS: One MoE checkpoint for flexible runtime sparsity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23012)\n- [arxiv'25] [ElasWave: An Elastic-Native System for Scalable Hybrid-Parallel Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00606)\n- [arxiv'25] [Efficient AllReduce with Stragglers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23523)\n- [SOSP'25] [Mycroft: Tracing Dependencies in Collective Communication Towards Reliable LLM 
Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764848)\n- [SOSP'25] [Robust LLM Training Infrastructure at ByteDance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16293)\n- [SC'25] [LowDiff: Efficient Frequent Checkpointing via Low-Cost Differential for High-Performance Distributed Training Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04084)\n- [OSDI'25] [Understanding Stragglers in Large Model Training Using What-if Analysis](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Flin-jinkun)\n- [SIGMOD'25] [Malleus: Straggler-Resilient Hybrid Parallel Training of Large-scale Models via Malleable Data and Model Parallelization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13333)\n- [arxiv'25] [Checkmate: Zero-Overhead Model Checkpointing via Network Gradient Replication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13522)\n- [ATC'25] [SAVE: Software-Implemented Fault Tolerance for Model Inference against GPU Memory Bit Flips](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fzheng)\n- [ATC'25] [Universal Checkpointing: A Flexible and Efficient Distributed Checkpointing System for Large-Scale DNN Training with Reconfigurable Parallelism](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Flian)\n- [arxiv'25] [Adaptra: Straggler-Resilient Hybrid-Parallel Training with Pipeline Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19232v1)\n- [arxiv'25] [Nonuniform-Tensor-Parallelism: Mitigating GPU failure impact for Scaled-up LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06095)\n- [arxiv'25] [Characterizing GPU Resilience and Impact on AI\u002FHPC Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11901)\n- [NSDI'25] [BCP: A Unified Checkpointing System for Large Foundation Model Development](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fwan-borui)\n- [NSDI'25] [Minder: Faulty Machine Detection for Large-scale Distributed Model Training](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fdeng)\n- [EuroSys'25] [SkyServe: Serving AI Models across Regions and Clouds with Spot Instances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01438)\n- [ASPLOS'25] [PCcheck: Persistent Concurrent Checkpointing for ML](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707255)\n- [arxiv'24] [FALCON: Pinpointing and Mitigating Stragglers for Large-Scale Hybrid-Parallel Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12588)\n- [arxiv'24] [MoEtion: Efficient and Reliable Checkpointing for Mixture-of-Experts Models at Scale](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2412.15411)\n- [arxiv'24] [MoC-System: Efficient Fault Tolerance for Sparse Mixture-of-Experts Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04307)\n- [arxiv'24] [TrainMover: Efficient ML Training Live Migration with No Memory Overhead](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2412.12636)\n- [arxiv'24] [Cloud Atlas: Efficient Fault Localization for Cloud Systems using Language Models and Causal Insight](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08694)\n- [arxiv'24] [ByteCheckpoint: A Unified Checkpointing System for Large Foundation Model Development](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20143)\n- [arxiv'24] [Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed 
Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18820)\n- [arxiv'24] [Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04656)\n- [arxiv'24] [PARALLELGPUOS: A Concurrent OS-level GPU Checkpoint and Restore System using Validated Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12079)\n- [SOSP'24] [ReCycle: Resilient Training of Large DNNs using Pipeline Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14009)\n- [HPDC'24] [DataStates-LLM: Lazy Asynchronous Checkpointing for Large Language Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658685)\n- [EuroSys'24] [Just-In-Time Checkpointing: Low Cost Error Recovery from Deep Learning Training Failures](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650085)\n- [NSDI'24] [Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fduan)\n- [arxiv'23] [Unicron: Economizing Self-Healing LLM Training at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00134)\n- [VLDB'23] [Efficient Fault Tolerance for Recommendation Model Training via Erasure Coding](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3611479.3611514)\n- [SOSP'23] [GEMINI: Fast Failure Recovery in Distributed Training with In-Memory Checkpoints](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613145)\n- [SOSP'23] [Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613152)\n- [NSDI'23] Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs\n- [EuroSys'22] Varuna: scalable, low-cost training of massive deep learning models\n- [ATC'22] Sibylla: To Retry or Not To Retry on Deep Learning Job Failure\n- [MLSys'21] [Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html)\n- [FAST'21] CheckFreq: Frequent, Fine-Grained DNN Checkpointing\n- [ICSE'20] An Empirical Study on Program Failures of Deep Learning Jobs\n\n## GPU Memory Management & Optimization\n- [SC'25] [HELM: Characterizing Unified Memory Accesses to Improve GPU Performance under Memory Oversubscription](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759812)\n- [SC'25] [MLP-Offload: Multi-Level, Multi-Path Offloading for LLM Pre-training to Break the GPU Memory Wall](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02480)\n- [arxiv'25] [CARMA: Collocation-Aware Resource Manager with GPU Memory Estimator](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19073)\n- [arxiv'25] [Reducing GPU Memory Fragmentation via Spatio-Temporal Planning for Efficient Large-Scale Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.16274)\n- [ISCA'25] [Forest: Access-aware GPU UVM Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731047)\n- [EuroSys'25] [MEPipe: Democratizing LLM Training with Memory-Efficient Slice-Level Pipeline Scheduling on Cost-Effective Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717469)\n- [EuroSys'25] [Mist: Efficient Distributed Training of Large Language Models 
via Memory-Parallelism Co-Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19050)\n- [FAST'25 WiP] Baton: Orchestrating GPU Memory for LLM Training on Heterogeneous Cluster\n- [CGO'25] [IntelliGen: Instruction-Level Auto-tuning for Tensor Program with Monotonic Memory Optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696443.3708967)\n- [arxiv'25] [Memory Analysis on the Training Course of DeepSeek Models](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.07846)\n- [IJCAI'24] [LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2024\u002F0699.pdf)\n- [MICRO'24] [SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07518v2)\n- [arxiv'24] [Accelerating Large Language Model Training with 4D Parallelism and Memory Consumption Estimator](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06465)\n- [TACO'24] [ATP: Achieving Throughput Peak for DNN Training via Smart GPU Memory Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3701996)\n- [ICML'24] [GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection](https:\u002F\u002Fopenreview.net\u002Fforum?id=hYHsrKDiX7)\n- [ASPLOS'24] [GMLake: Efficient and Transparent GPU Memory Defragmentation for Large-scale DNN Training with Virtual Memory Stitching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08156)\n- [arxiv'23] [Rethinking Memory and Communication Cost for Efficient Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06003)\n- [arxiv'23] Quantized Distributed Training of Large Models with Convergence Guarantees (`QSDP`)\n- [arxiv'23] Does compressing activations help model parallel training?\n- [SoCC'23] Towards GPU Memory Efficiency for Distributed Training at Scale\n- [VLDB'23] [PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp3848-huang.pdf)\n- [SOSP'23] Efficient Memory Management for Large Language Model Serving with PagedAttention\n- [HPCA'23] MPress: Democratizing Billion-Scale Model Training on Multi-GPU Servers via Memory-Saving Inter-Operator Parallelism\n- [HPCA'23] Tensor Movement Orchestration in Multi-GPU Training Systems\n- [IJCAI'23] [OSDP: Optimal Sharded Data Parallel for Distributed Deep Learning](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2023\u002F0238.pdf)\n- [ICLR'22] [LoRA: Low-Rank Adaptation of Large Language Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=nZeVKeeFYf9)\n  - algorithmic method for memory efficiency\n- [VLDB'22] Harmony: Overcoming the Hurdles of GPU Memory Capacity to Train Massive DNN Models on Commodity Servers\n- [ATC'21] ZeRO-Offload: Democratizing Billion-Scale Model Training\n- [ICLR'21] ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training\n- [ICLR'21] Dynamic Tensor Rematerialization\n- [SC'21] ZeRO-infinity: breaking the GPU memory wall for extreme scale deep learning\n- [HPCA'21] [Sentinel: Efficient Tensor Migration and Allocation on Heterogeneous Memory Systems for Deep Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9407112)\n- [MLSys'20] Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization\n- [ASPLOS'20] Capuchin: Tensor-based GPU Memory Management for Deep Learning\n- [ASPLOS'20] SwapAdvisor: Pushing Deep Learning Beyond the GPU Memory Limit via Smart Swapping\n- 
[ESEC\u002FFSE'20] Estimating GPU memory consumption of deep learning models\n- [SC'20] ZeRO: memory optimizations toward training trillion parameter models\n- [ISCA'18] Gist: Efficient Data Encoding for Deep Neural Network Training\n- [PPoPP'18] Superneurons: dynamic GPU memory management for training deep neural networks\n- [MICRO'16] vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design\n- [arxiv'16] Training Deep Nets with Sublinear Memory Cost\n\n## GPU Sharing\n- [arxiv'25] [MSched: GPU Multitasking via Proactive Memory Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24637)\n- [SC workshop'25] [WAGES: Workload-Aware GPU Sharing System for Energy-Efficient Serverless LLM Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767396)\n- [SOSP'25] [LithOS: An Operating System for Efficient Machine Learning on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764818)\n- [arxiv'25] [Towards Efficient and Practical GPU Multitasking in the Era of LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08448)\n- [arxiv'25] [Prism: Unleashing GPU Sharing for Cost-Efficient Multi-LLM Serving](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04021)\n- [OSDI'25] [XSched: Preemptive Scheduling for Diverse XPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fshen-weihang)\n- [EuroSys'25] [Improving GPU Sharing Performance through Adaptive Bubbleless Spatial-Temporal Sharing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3689031.3696070)\n- [PPOPP'25] [SGDRC: Software-Defined Dynamic Resource Control for Concurrent DNN Inference on NVIDIA GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3710848.3710863)\n- [arxiv'24] [PREBA: A Hardware\u002FSoftware Co-Design for Multi-Instance GPU based AI Inference Servers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [SC'24] [ParvaGPU: Efficient Spatial GPU Sharing for Large-Scale DNN Inference in Cloud Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14447)\n- [arxiv'24] [Tally: Non-Intrusive Performance Isolation for Concurrent Deep Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07381)\n- [ICPP'24] [MIGER: Integrating Multi-Instance GPU and Multi-Process Service for Deep Learning Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673089)\n- [ASPLOS'24] [RAP: Resource-aware Automated GPU Sharing for Multi-GPU Recommendation Model Training and Input Preprocessing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640406)\n- [EuroSys'24] [Orion: Interference-aware, Fine-grained GPU Sharing for ML Applications](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3627703.3629578)\n- [ATC'23] [Beware of Fragmentation: Scheduling GPU-Sharing Workloads with Fragmentation Gradient Descent](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fweng)\n- [NSDI'23] [Transparent GPU Sharing in Container Clouds for Deep Learning Workloads](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fwu)\n- [ICPP'23] FaST-GShare: Enabling Efficient Spatio-Temporal GPU Sharing in Serverless Computing for Deep Learning Inference\n- [arxiv'23] [GACER: Granularity-Aware ConcurrEncy Regulation for Multi-Tenant Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11745)\n- [arxiv'23] MuxFlow: Efficient and Safe GPU Sharing in Large-Scale Production Deep Learning 
Clusters\n- [SoCC'22] MISO: exploiting multi-instance GPU capability on multi-tenant GPU clusters\n- [PACT'22] GPUPool: A Holistic Approach to Fine-Grained GPU Sharing in the Cloud\n- [ATC'21] Zico: Efficient GPU Memory Sharing for Concurrent DNN Training\n- [MLSys'20] Salus: Fine-Grained GPU Sharing Primitives for Deep Learning Applications\n- [OSDI'20] AntMan: Dynamic Scaling on GPU Clusters for Deep Learning\n- [OSDI'20] PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications\n- [RTAS'19] [Fractional GPUs: Software-Based Compute and Memory Bandwidth Reservation for GPUs](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8743200)\n\n## Compiler\n- [arxiv'26] [Axe: A Simple Unified Layout Abstraction for Machine Learning Compilers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.19092)\n- [arxiv'25] [Tawa: Automatic Warp Specialization for Modern GPUs with Asynchronous References](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14719)\n- [arxiv'25] [Dato: A Task-Based Programming Model for Dataflow Accelerators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06794)\n- [arxiv'25] [Flashlight: PyTorch Compiler Extensions to Accelerate Attention Variants](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02043)\n- [NeurIPS'25] [REASONING COMPILER: LLM-Guided Optimizations for Efficient Model Serving](https:\u002F\u002Fopenreview.net\u002Fpdf?id=2D4TuZyNnr)\n- [SOSP'25] [Mercury: Unlocking Multi-GPU Operator Optimization for LLMs via Remote Memory Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764798)\n- [MICRO'25] [StreamTensor: Make Tensors Stream in Dataflow Accelerators for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13694)\n- [OSDI'25] [PipeThreader: Software-Defined Pipelining for Efficient DNN Execution](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fcheng)\n- [OSDI'25] [QiMeng-Xpiler: Transcompiling Tensor Programs for Deep Learning Systems with a Neural-Symbolic Approach](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fdong)\n- [OSDI'25] [Mirage: A Multi-Level Superoptimizer for Tensor Programs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwu-mengdi)\n- [OSDI'25] [KPerfIR: Towards a Open and Compiler-centric Ecosystem for GPU Kernel Performance Tooling on Modern AI Workloads](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fguan)\n- [arxiv'25] [TileLang: A Composable Tiled Programming Model for AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17577)\n- [arxiv'25] [Hexcute: A Tile-based Programming Language with Automatic Layout and Task-Mapping Synthesis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16214)\n- [arxiv'25] [DeepCompile: A Compiler-Driven Approach to Optimizing Distributed Deep Learning Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09983)\n- [ASPLOS'25] [Mosaic: Exploiting Instruction-Level Parallelism on Deep Learning Accelerators with iTex Tessellation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716262)\n- [ASPLOS'25] [Concerto: Automatic Communication Optimization and Scheduling for Large-Scale Deep Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707223)\n- [arxiv'25] [Hercules: A Compiler for Productive Programming of Heterogeneous Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10855)\n- [CC'25] [LLM Compiler: Foundation Language Models for 
Compiler Optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3708493.3712691)\n- [CGO'25] [IntelliGen: Instruction-Level Auto-tuning for Tensor Program with Monotonic Memory Optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696443.3708967)\n- [SOSP'24] [Scaling Deep Learning Computation over the Inter-core Connected Intelligence Processor with T10](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695955)\n- [OSDI'23] Cocktailer: Analyzing and Optimizing Dynamic Control Flow in Deep Learning\n- [OSDI'23] Welder: Scheduling Deep Learning Memory Access via Tile-graph\n- [OSDI'23] Effectively Scheduling Computational Graphs of Deep Neural Networks toward Their Domain-Specific Accelerators\n- [OSDI'23] EINNET: Optimizing Tensor Programs with Derivation-Based Transformations\n- [OSDI'23] [Optimizing Dynamic Neural Networks with Brainstorm](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fcui)\n- [OSDI'22] [ROLLER: Fast and Efficient Tensor Compilation for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Fzhu)\n- [OSDI'20] [Rammer: Enabling Holistic Deep Learning Compiler Optimizations with rTasks](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fma)\n- [OSDI'20] [Ansor: Generating High-Performance Tensor Programs for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fzheng)\n- [ASPLOS'20] [FlexTensor: An Automatic Schedule Exploration and Optimization Framework for Tensor Computation on Heterogeneous System](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3373376.3378508)\n- [OSDI'18] [TVM: An Automated End-to-End Optimizing Compiler for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi18\u002Fpresentation\u002Fchen)\n\n## GPU Kernel Optimization\n- [ASPLOS'26] [Tilus: A Tile-Level GPGPU Programming Language for Low-Precision Computation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762219)\n- [EuroSys'26] [Efficient and Adaptable Overlapping for Computation and Communication via Signaling and Reordering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [Mirage Persistent Kernel: A Compiler and Runtime for Mega-Kernelizing Tensor Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.22219)\n- [arxiv'25] [KernelEvolve: Scaling Agentic Kernel Coding for Heterogeneous AI Accelerators at Meta](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.23236)\n- [arxiv'25] [Memory-Efficient Acceleration of Block Low-Rank Foundation Models on Resource Constrained GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20861)\n- [arxiv'25] [FlashFuser: Expanding the Scale of Kernel Fusion for Compute-Intensive Operators via Inter-Core Connection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12949)\n- [arxiv'25] [Flash Multi-Head Feed-Forward Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.06989)\n- [arxiv'25] [Iris: First-Class Multi-GPU Programming Experience in Triton](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.12500)\n- [arxiv'25] [AccelOpt: A Self-Improving LLM Agentic System for AI Accelerator Kernel Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [arxiv'25] [ParallelKittens: Systematic and Practical Simplification of Multi-GPU AI Kernels](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13940)\n- [SC'25] [HyTiS: Hybrid Tile Scheduling for GPU GEMM with Enhanced Wave 
Utilization and Cache Locality](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759771)\n- [SC'25] [UltraAttn: Efficiently Parallelizing Attention through Hierarchical Context-Tiling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [arxiv'25] [HipKittens: Fast and Furious AMD Kernels](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08083)\n- [TACO'25] [HuntKTm: Hybrid Scheduling and Automatic Management for Efficient Kernel Execution on Modern GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774652)\n- [NeurIPS'25] [FlashMoE: Fast Distributed MoE in a Single Kernel](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04667)\n- [MLSys'25] [FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [arxiv'25] [LiquidGEMM: Hardware-Efficient W4A8 GEMM Kernel for High-Performance LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01229)\n- [arxiv'25] [TileLang: A Composable Tiled Programming Model for AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17577)\n- [PLDI'25] [Task-Based Tensor Computations on Modern GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07004)\n- [TACO'25] [Kitsune: Enabling Dataflow Execution on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3777466)\n- [ICLR'25] [ThunderKittens: Simple, Fast, and Adorable Kernels](https:\u002F\u002Fopenreview.net\u002Fforum?id=0fJfVOSUra)\n- [PLDI'25] [Task-Based Tensor Computations on Modern GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3729262)\n- [ASPLOS'25] [Composing Distributed Computations Through Task and Kernel Fusion](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707216)\n- [MLSys'25] [FastTree: Optimizing Attention Kernel and Runtime for Tree-Structured LLM Inference](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3278)\n- [arxiv'24] [ACS: Concurrent Kernel Execution on Irregular, Input-Dependent Computational Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12377)\n- [arxiv'24] [Flex Attention: A Programming Model for Generating Optimized Attention Kernels](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05496)\n- [NeurIPS'24] [FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F7ede97c3e082c6df10a8d6103a2eebd2-Abstract-Conference.html)\n- [ICLR'24] [FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning](https:\u002F\u002Fopenreview.net\u002Fforum?id=mZn2Xyh9Ec)\n- [CGO'24] [A Framework for Fine-Grained Synchronization of Dependent GPU Kernels](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444873)\n- [RTAS'24] [Demystifying NVIDIA GPU Internals to Enable Reliable GPU Management](https:\u002F\u002Fwww.cs.unc.edu\u002F~jbakita\u002Frtas24-private.pdf)\n  - slides: [link](https:\u002F\u002Fwww.cs.unc.edu\u002F~jbakita\u002Frtas24_slides.pdf)\n- [arxiv'23] [Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03598)\n- [OSDI'23] [Welder: Scheduling Deep Learning Memory Access via Tile-graph](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fshi)\n- [arxiv'21] [Characterizing Concurrency Mechanisms for NVIDIA GPUs under Deep Learning 
Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.00459)\n- [SIGMETRICS'21] [Demystifying the Placement Policies of the NVIDIA GPU Thread Block Scheduler for Concurrent Kernels](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3453953.3453972)\n- [NeurIPS'20] [Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F5f0ad4db43d8723d18169b2e4817a160-Abstract.html)\n- [NeurIPS'22] [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html)\n- [RTSS'17] [GPU Scheduling on the NVIDIA TX2: Hidden Details Revealed](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8277284)\n\n## LLM Long Context\n- [SC'25] [UltraAttn: Efficiently Parallelizing Attention through Hierarchical Context-Tiling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [SC'25] [RingX: Scalable Parallel Attention for Long-Context Learning on HPC](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759859)\n- [arxiv'25] [Optimizing Long-context LLM Serving via Fine-grained Sequence Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06247)\n- [NeurIPS'25] [StarTrail: Concentric Ring Sequence Parallelism for Efficient Near-Infinite-Context Transformer Model Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=PxximqJil4)\n- [arxiv'25] [Long-Context Attention Benchmark: From Kernel Efficiency to Distributed Context Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.17896)\n- [arxiv'25] [Efficient Long-context Language Model Training by Core Attention Disaggregation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18121)\n- [SOSP'25] [DCP: Addressing Input Dynamism In Long-Context Training via Dynamic Context Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764849)\n- [arxiv'25] [Data-Centric Elastic Pipeline Parallelism for Efficient Long-Context LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21275)\n- [arxiv'25] [Strata: Hierarchical Context Caching for Long Context Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18572)\n- [arxiv'25] [TokenLake: A Unified Segment-level Prefix Cache Pool for Fine-grained Elastic Long-Context LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17219v1)\n- [ACL'25] [MiniKV: Pushing the Limits of 2-Bit KV Cache via Compression and System Co-Design for Efficient Long Context Inference](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.952.pdf)\n- [arxiv'25] [HelixPipe: Efficient Distributed Training of Long Sequence Transformers with Attention Parallel Pipeline Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00394)\n- [arxiv'25] [SALE : Low-bit Estimation for Efficient Sparse Attention in Long-context LLM Prefilling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24179)\n- [arxiv'25] [Training Long-Context LLMs Efficiently via Chunk-wise Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16710)\n- [arxiv'25] [SlimPipe: Memory-Thrifty and Efficient Pipeline Parallelism for Long-Context LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14519)\n- [ASPLOS'25] [FlexSP: Accelerating Large Language Model Training via Flexible Sequence 
Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715998)\n- [arxiv'25] [XAttention: Block Sparse Attention with Antidiagonal Scoring](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16428)\n- [arxiv'25] [SPPO:Efficient Long-sequence LLM Training via Adaptive Sequence Pipeline Parallel Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10377)\n- [arxiv'25] [ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.21231)\n- [arxiv'25] [Long-Context Inference with Retrieval-Augmented Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [PODC'25] [System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3662158.3662806)\n- [arxiv'25] [ParallelComp: Parallel Long-Context Compressor for Length Extrapolation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14317)\n- [arxiv'25] [LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14866)\n- [arxiv'25] [MoBA: Mixture of Block Attention for Long-Context LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13189)\n- [arxiv'25] [Tactic: Adaptive Sparse Attention with Clustering and Distribution Fitting for Long-Context LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12216)\n- [arxiv'25] [APB: Accelerating Distributed Long-Context Inference by Passing Compressed Context Blocks across GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12085)\n- [SIGMOD'25] [MEMO: Fine-grained Tensor Management For Ultra-long Context LLM Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3709703)\n- [arxiv'25] [Twilight: Adaptive Attention Sparsity with Hierarchical Top-p Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02770)\n- [arxiv'25] [Adjoint sharding for very long context training of state space models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00692)\n- [arxiv'24] [LoL-PIM: Long-Context LLM Decoding with Scalable DRAM-PIM System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20166)\n- [arxiv'24] [Data-Centric and Heterogeneity-Adaptive Sequence Parallelism for Efficient LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01523)\n- [ICLR'24] [Efficient Streaming Language Models with Attention Sinks](https:\u002F\u002Fopenreview.net\u002Fforum?id=NG7sS51zVF) [[Code](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fstreaming-llm)]\n- [SOSP'24] [LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695948)\n- [arxiv'24] [USP: A Unified Sequence Parallelism Approach for Long Context Generative AI](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07719)\n- [arxiv'24] [Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16978v1)\n- [NeurIPS'24 Workshop] [Long Context RAG Performance of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03538)\n- [arxiv'24] [ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21465)\n- [arxiv'24] [Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'24] 
[CSPS: A Communication-Efficient Sequence-Parallelism based Serving System for Transformer based Models with Long Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [COLM'24] [TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11912)\n- [arxiv'24] [FocusLLM: Scaling LLM's Context by Parallel Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11745)\n- [Survey :mag:] [IJCAI'24] [X-former Elucidator: Reviving Efficient Attention for Long Context Language Modeling](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2024\u002F904)\n\n## Model Compression\n> For comprehensive list of quantization papers, refer to https:\u002F\u002Fgithub.com\u002FEfficient-ML\u002FAwesome-Model-Quantization.\n\n- [PPoPP'26] [JanusQuant: Accurate and Efficient 2-bit KV Cache Quantization for Long-Context Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786428)\n- [PPoPP'26] [RoMeo: Mitigating Dual-dimensional Outliers with Rotated Mixed Precision Quantization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786419)\n- [PPoPP'26] [High-Throughput Non-uniformly Quantized 3-bit LLM Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786423)\n- [arxiv'26] [Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20088)\n- [arxiv'25] [Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23202)\n- [EMNLP'25] [Scaling Down, Serving Fast: Compressing and Deploying Efficient LLMs for Recommendation Systems](https:\u002F\u002Faclanthology.org\u002F2025.emnlp-industry.119.pdf)\n- [NeurIPS'25] [70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float (DFloat11)](https:\u002F\u002Fopenreview.net\u002Fforum?id=xdNAVP7TGy)\n- [arxiv'25] [MergeMoE: Efficient Compression of MoE Models via Expert Output Merging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14436)\n- [CLUSTER'25] [SplitQuant: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and Adaptive Quantization](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186491\u002F2aCq16HCtPO)\n- [JMLR'25] [BitNet: 1-bit Pre-training for Large Language Models](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume26\u002F24-2050\u002F24-2050.pdf)\n- [OSDI'25] [DecDEC: A Systems Approach to Advancing Low-Bit LLM Quantization](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fpark-yeonhong)\n- [arxiv'25] [TAH-QUANT: Effective Activation Quantization in Pipeline Parallelism over Slow Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01352)\n- [arxiv'25] [DECA: A Near-Core LLM Decompression Accelerator Supporting Out-of-Order Invocation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19349)\n- [arxiv'25] [ITERA-LLM: Boosting Sub-8-Bit Large Language Model Inference via Iterative Tensor Decomposition](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.08981)\n- [ISCA'25] [Transitive Array: An Efficient GEMM Accelerator with Result Reuse](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16339)\n- [arxiv'24] [Accelerating Distributed Deep Learning using Lossless Homomorphic Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07529)\n- [ICML'24] [Any-Precision 
LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fpark24e.html)\n- [ACL'23] [Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02301)\n- [ICLR'23] [GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.17323)\n- [OSDI'23] [AdaEmbed: Adaptive Embedding for Large-Scale Recommendation Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Flai)\n- [EuroSys'23] Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies\n- [ICML'22] [TSPipe: Learn from Teacher Faster with Pipelines](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Flim22a.html)\n\n## Federated Learning\n- [VLDB'25] [PS-MI: Accurate, Efficient, and Private Data Valuation in Vertical Federated Learning](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp3559-zhou.pdf)\n- [arxiv'24] [FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11304v1)\n- [MLSys'24] [LIFL: A Lightweight, Event-driven Serverless Platform for Federated Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10968)\n- [arxiv'24] [FedEx: Expediting Federated Learning over Heterogeneous Mobile Devices by Overlapping and Participant Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00943)\n- [KDD'24] [FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17706)\n- [CCGrid'24] [Apodotiko: Enabling Efficient Serverless Federated Learning in Heterogeneous Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.14033)\n- [EuroSys'24] [Dordis: Efficient Federated Learning with Dropout-Resilient Differential Privacy](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629559)\n- [arxiv'24] [Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03871v1)\n- [SAC'24] [Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07295)\n- [arxiv'23] [CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03615)\n- [arxiv'23] [Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15080)\n- [IMWUT'23] [AttFL: A Personalized Federated Learning Framework for Time-series Mobile and Embedded Sensor Data Processing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3610917)\n- [Survey :mag:] [FGCS'23] [Model aggregation techniques in federated learning: A comprehensive survey](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167739X23003333)\n- [SoCC'23] [Auxo: Heterogeneity-Mitigating Federated Learning via Scalable Client Clustering](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.16656)\n- [MLSys'23] [GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F3ed923f9f88108cb066c6568d3df2666-Abstract-mlsys2023.html)\n- [WWW'23] 
[To Store or Not? Online Data Selection for Federated Learning with Limited Storage](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00195)\n- [EuroSys'23] REFL: Resource-Efficient Federated Learning\n- [VLDB'23] [FederatedScope: A Flexible Federated Learning Platform for Heterogeneity](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp1059-li.pdf)\n- [RecSys'22] [Towards Fair Federated Recommendation Learning: Characterizing the Inter-Dependence of System and Data Heterogeneity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02633)\n- [TMLR'22] [Optimal Client Sampling for Federated Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13723)\n- [ICML'22] FedScale: Benchmarking Model and System Performance of Federated Learning at Scale\n- [MobiSys'22] FedBalancer: data and pace control for efficient federated learning on heterogeneous clients\n- [MobiCom'22] PyramidFL: A Fine-grained Client Selection Framework for Efficient Federated Learning\n- [MLSys'22] PAPAYA: Practical, Private, and Scalable Federated Learning\n- [AISTATS'22] Federated Learning with Buffered Asynchronous Aggregation\n- [NeurIPS'21] Federated Reconstruction: Partially Local Federated Learning\n- [NeurIPS'21] FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout\n- [OSDI'21] Oort: Efficient Federated Learning via Guided Participant Selection\n- [MICRO'21] AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning\n- [MLSys'19] Towards Federated Learning at Scale: System Design\n- [Survey :mag:] [ACM CSUR'22] Federated Learning for Smart Healthcare: A Survey\n\n## Privacy-Preserving ML\n- [arxiv'26] [Scaling up Privacy-Preserving ML: A CKKS Implementation of Llama-2-7B](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.18511)\n- [CCS'25] [MoEcho: Exploiting Side-Channel Attacks to Compromise User Privacy in Mixture-of-Experts LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15036)\n- [USENIX Security'25] [Phantom: Privacy-Preserving Deep Neural Network Model Obfuscation in Heterogeneous TEE and GPU System](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity25\u002Fsec25cycle1-prepub-1136-bai.pdf)\n- [ASPLOS'24] [LazyDP: Co-Designing Algorithm-Software for Scalable Training of Differentially Private Recommendation Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640384)\n- [NeurIPS'24] [Nimbus: Secure and Efficient Two-Party Inference for Transformers](https:\u002F\u002Fopenreview.net\u002Fforum?id=G7QS68ICPJ)\n- [ACL'24] [SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC](https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.790\u002F)\n- [S&P'24] [BOLT: Privacy-Preserving, Accurate and Efficient Inference for Transformers](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10646705)\n- [DAC'23] Privacy-Preserving DNN Training with Prefetched Meta-Keys on Heterogeneous Neural Network Accelerators\n- [ICLR'23] [MPCFormer: fast, performant and private Transformer inference with MPC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01452)\n- [NeurIPS'22] [Iron: Private Inference on Transformers](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F64e2449d74f84e5b1a5c96ba7b3d308e-Abstract-Conference.html)\n\n## ML APIs & Application-Side Optimization\n- [ASPLOS'25] [Towards End-to-End Optimization of LLM-based Applications with 
Ayo](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716278)\n- [arxiv'24] [APIServe: Efficient API Support for Large-Language Model Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01869)\n- [OSDI'24] [ChameleonAPI: Automatic and Efficient Customization of Neural Networks for ML Applications](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fliu)\n- [ICML'22] [Efficient Online ML API Selection for Multi-Label Classification Tasks](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fchen22ad.html) (`FrugalMCT`)\n- [NeurIPS'20] [FrugalML: How to use ML Prediction APIs more accurately and cheaply](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F789ba2ae4d335e8a2ad283a3f7effced-Abstract.html)\n\n## ML for Systems\n- [arxiv'25] [AccelOpt: A Self-Improving LLM Agentic System for AI Accelerator Kernel Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [arxiv'25] [ASAP: an Agentic Solution to Auto-optimize Performance of Large-Scale LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03844)\n- [NeurIPS'25] [REASONING COMPILER: LLM-Guided Optimizations for Efficient Model Serving](https:\u002F\u002Fopenreview.net\u002Fpdf?id=2D4TuZyNnr)\n- [arxiv'25] [Barbarians at the Gate: How AI is Upending Systems Research](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06189) [[Code](https:\u002F\u002Fgithub.com\u002FUCB-ADRS\u002FADRS)]\n- [arxiv'25] [SuperCoder: Assembly Program Superoptimization with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11480)\n- [HotOS'25] [How I learned to stop worrying and love learned OS policies](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3713082.3730384)\n- [VLDB'25] [E2ETune: End-to-End Knob Tuning via Fine-tuned Generative Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11581)\n- [SenSys'25] [CheckMate: LLM-Powered Approximate Intermittent Computing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3715014.3722056)\n- [ICSE'25] [Large Language Models as Configuration Validators](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficse\u002F2025\u002F056900a204\u002F215aWCaXlSg)\n- [NeurIPS'24] [IaC-Eval: A code generation benchmark for Infrastructure-as-Code programs](https:\u002F\u002Fwww.cs-pk.com\u002Fpreprint-iac-eval.pdf)\n- [arxiv'24] [Cloud Atlas: Efficient Fault Localization for Cloud Systems using Language Models and Causal Insight](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08694)\n- [arxiv'24] [LLMTune: Accelerate Database Knob Tuning with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11581)\n- [SIGCOMM'24] [NetLLM: Adapting Large Language Models for Networking](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672268)\n- [arxiv'24] [LLM-Enhanced Data Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02643)\n- [arxiv'24] [MPIrigen: MPI Code Generation through Domain-Specific Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09126)\n- [arxiv'24] [Can Large Language Models Write Parallel Code?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12554)\n- [arxiv'23] [LLM-Assisted Code Cleaning For Training Accurate Code Generators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.14904)\n- [arxiv'23] [Large Language Models for Compiler Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07062)\n- [VLDB'23] [How Large Language Models Will Disrupt Data 
Management](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp3302-fernandez.pdf)\n\n## Energy Efficiency\n- [arxiv'26] [Where Do the Joules Go? Diagnosing Inference Energy Consumption](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22076)\n- [arxiv'26] [Kareus: Joint Reduction of Dynamic and Static Energy in Large Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17654)\n- [arxiv'26] [GreenServ: Energy-Efficient Context-Aware Dynamic Routing for Multi-Model LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17551)\n- [NeurIPS'25] [CATransformers: Carbon Aware Transformers Through Joint Model-Hardware Optimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=IjMZfMVyLF)\n- [MICRO'25] [SuperMesh: Energy-Efficient Collective Communications for Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756085)\n- [MICRO'25] [Characterizing the Efficiency of Distributed Training: A Power, Performance, and Thermal Perspective](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.10371)\n- [arxiv'25] [VoltanaLLM: Feedback-Driven Frequency Control and State-Space Routing for Energy-Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04827)\n- [arxiv'25] [GreenLLM: SLO-Aware Dynamic Frequency Scaling for Energy-Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.16449)\n- [arxiv'25] [Power Stabilization for AI Training Datacenters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.14318)\n- [arxiv'25] [The ML.ENERGY Benchmark: Toward Automated Inference Energy Measurement and Optimization](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.06371)\n- [arxiv'25] [EcoServe: Enabling Cost-effective LLM Serving with Proactive Intra- and Inter-Instance Orchestration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18154)\n- [NSDI'25] [GREEN: Carbon-efficient Resource Scheduling for Machine Learning Clusters](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fxu-kaiqiang)\n- [HPCA'25] throttLL'eM: Predictive GPU Throttling for Energy Efficient LLM Inference Serving\n- [arxiv'25] [EcoServe: Designing Carbon-Aware AI Inference Systems](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.05043)\n- [arxiv'25] [Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01671)\n- [arxiv'24] [GreenLLM: Disaggregating Large Language Model Serving on Heterogeneous GPUs for Lower Carbon Emissions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20322)\n- [arxiv'24] [EaCO: Resource Sharing Dynamics and Its Impact on Energy Efficiency for DNN Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08294)\n- [arxiv'24] [DynamoLLM: Designing LLM Inference Clusters for Performance and Energy Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00741)\n- [SOSP'24] [Perseus: Removing Energy Bloat from Large Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06902)\n- [arxiv'23] [CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03615)\n- [ATC'23] EnvPipe: Performance-preserving DNN Training Framework for Saving Energy\n- [NSDI'23] Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training\n\n## Retrieval-Augmented Generation (RAG)\n- [ICDE'25] [SAGE: A Framework of Precise Retrieval for RAG](https:\u002F\u002Fdbgroup.cs.tsinghua.edu.cn\u002Fligl\u002Fpapers\u002FICDE25-SAGE.pdf)\n- [SOSP'25] [HedraRAG: Co-Optimizing 
Generation and Retrieval for Heterogeneous RAG Workflows](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764806)\n- [ISCA'25] [HeterRAG: Heterogeneous Processing-in-Memory Acceleration for Retrieval-augmented Generation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731089)\n- [arxiv'25] [Patchwork: A Unified Framework for RAG Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07833)\n- [arxiv'25] [Accelerating Retrieval-Augmented Language Model Serving with Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14021)\n- [arxiv'25] [RAGO: Systematic Performance Optimization for Retrieval-Augmented Generation Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14649)\n- [arxiv'25] [Long-Context Inference with Retrieval-Augmented Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [VLDB'25] [Chameleon: a heterogeneous and disaggregated accelerator system for retrieval-augmented language models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09949)\n- [arxiv'24] [Towards Understanding Systems Trade-offs in Retrieval-Augmented Generation Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11854)\n- [arxiv'24] [RAGServe: Fast Quality-Aware RAG Systems with Configuration Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10543v1)\n- [arxiv'24] [Dehallucinating Parallel Context Extension for Retrieval-Augmented Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14905)\n- [arxiv'24] [Accelerating Retrieval-Augmented Language Model Serving with Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14021)\n- [NeurIPS'24 Workshop] [Long Context RAG Performance of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03538)\n\n## Simulation\n- [arxiv'26] [SynPerf: A Hybrid Analytical-ML Framework for GPU Performance Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14910)\n- [arxiv'26] [Revati: Transparent GPU-Free Time-Warp Emulation for LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.00397)\n- [arxiv'25] [Scalable Synthesis of distributed LLM workloads through Symbolic Tensor Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10480)\n- [MICRO'25] [PyTorchSim: A Comprehensive, Fast, and Accurate NPU Simulation Framework](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756045)\n- [MICRO'25] [Swift and Trustworthy Large-Scale GPU Simulation with Fine-Grained Error Modeling and Hierarchical Clustering](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3757107)\n- [arxiv'25] [Frontier: Simulating the Next Generation of LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03148v1)\n- [NAIC @ SIGCOMM'25] [MLSynth: Towards Synthetic ML Traces](https:\u002F\u002Faliireza.github.io\u002Ffiles\u002Fmlsynth-naic25.pdf)\n- [NAIC @ SIGCOMM'25] [Simulating LLM training workloads for heterogeneous compute and network infrastructure](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3748273.3749212)\n- [arxiv'25] [Frontier: Simulating the Next Generation of LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03148)\n- [arxiv'25] [Maya: Optimizing Deep Learning Training Workloads using Emulated Virtual Accelerators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20191)\n- [NSDI'25] [Accelerating Design Space Exploration for LLM Training Systems with Multi-experiment Parallel 
Simulation](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fgui)\n- [ASPLOS'25] [Forecasting GPU Performance for Deep Learning Training and Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707265)\n- [MLSys'24] [Vidur: A Large-Scale Simulation Framework For LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05465)\n\n## Systems for Agentic AI\n- [arxiv'26] [LRAgent: Efficient KV Cache Sharing for Multi-LoRA LLM Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.01053)\n- [arxiv'26] [VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16973)\n- [arxiv'26] [ToolCaching: Towards Efficient Caching for LLM Tool-calling](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2601.15335)\n- [arxiv'26] [Toward Efficient Agents: Memory, Tool learning, and Planning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14192)\n- [arxiv'26] [Sutradhara: An Intelligent Orchestrator-Engine Co-design for Tool-based Agentic Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12967)\n- [arxiv'26] [Beyond Max Tokens: Stealthy Resource Amplification via Tool Calling Chains in LLM Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10955)\n- [arxiv'26] [XGrammar 2: Dynamic and Efficient Structured Generation Engine for Agentic LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.04426)\n- [arxiv'26] [Nalar: An agent serving framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05109)\n- [arxiv'26] [Software-Defined Agentic Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03197)\n- [NSDI'26] [Agentix: An Efficient Serving Engine for LLM Agents as General Programs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fluo)\n- [arxiv'25] [ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21689)\n- [arxiv'25] [Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20848)\n- [arxiv'25] [Towards Efficient Agents: A Co-Design of Inference Architecture and System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18337)\n- [arxiv'25] [Beyond Training: Enabling Self-Evolution of Agents with MOBIMEM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.15784)\n- [arxiv'25] [Optimizing Agentic Language Model Inference via Speculative Tool Calls](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.15834)\n- [arxiv'25] [Astraea: A State-Aware Scheduling Engine for LLM-Powered Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14142)\n- [arxiv'25] [Measuring Agents in Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04123)\n- [arxiv'25] [Matrix: Peer-to-Peer Multi-Agent Synthetic Data Generation Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21686)\n- [arxiv'25] [Aragog: Just-in-Time Model Routing for Scalable Serving of Agentic Workflows](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20975)\n- [arxiv'25] [AccelOpt: A Self-Improving LLM Agentic System for AI Accelerator Kernel Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [ML for Systems @ NeurIPS'25] [Agentic Bridge Framework: Closing the Gap Between Agentic Capability and Performance Benchmarks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Rv664iOMNv)\n- [arxiv'25] [Continuum: Efficient and Robust Multi-Turn LLM Agent Scheduling with KV Cache 
Time-to-Live](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02230)\n- [arxiv'25] [Sherlock: Reliable and Efficient Agentic Workflow Execution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00330)\n- [arxiv'25] [A CPU-Centric Perspective on Agentic AI](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00739)\n- [SAA'25] Useful Agentic AI: A Systems Outlook\n- [SAA'25] Toward Systems Foundations for Agentic Exploration\n- [SAA'25] Supporting Our AI Overlords: Redesigning Data Systems to be Agent-First\n- [SAA'25] Cortex: Workflow-Aware Resource Pooling and Scheduling for Agentic Serving\n- [SAA'25] Tetris: Efficient and Predictive KV Cache Offloading for Agentic and Reasoning Workloads\n- [SAA'25] GPU Memory Prediction for Multimodal Model Training\n- [SAA'25] DMAS-Forge: A Framework for Transparent Deployment of AI Applications as Distributed Systems\n- [SAA'25] Automated Annotation Inference for MCP-based Agents\n- [SAA'25] EARL: Efficient Agentic Reinforcement Learning Systems for Large Language Models\n- [SAA'25] Unified Agentic Interfaces is All You Need for AI Agent Observability\n- [arxiv'25] [Flash-Searcher: Fast and Effective Web Agents via DAG-Based Parallel Execution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25301)\n- [arxiv'25] [MobiAgent: A Systematic Framework for Customizable Mobile Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00531)\n- [ICML'25] [The Berkeley Function Calling Leaderboard (BFCL): From Tool Use to Agentic Evaluation of Large Language Models](https:\u002F\u002Fopenreview.net\u002Fforum?id=2GmDdhBdDk)\n- [SIGCOMM'25] [Intent-Driven Network Management with Multi-Agent LLMs: The Confucius Framework](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3750537)\n- [arxiv'25] [rStar2-Agent: Agentic Reasoning Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20722)\n- [COLM'25] [R2E-Gym: Procedural Environments and Hybrid Verifiers for Scaling Open-Weights SWE Agents](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7evvwwdo3z)\n- [arxiv'25] [Efficient and Scalable Agentic AI with Heterogeneous Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.19635)\n- [arxiv'25] [Agent.xpu: Efficient Scheduling of Agentic LLM Workloads on Heterogeneous SoC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.24045)\n- [arxiv'25] [GSO: Challenging Software Optimization Tasks for Evaluating SWE-Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23671)\n- [ASPLOS'25] [ReCA: Integrated Acceleration for Real-Time and Efficient Cooperative Embodied Autonomous Agents](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716016)\n- [arxiv'25] [The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.08235)\n- [arxiv'24] [AI Metropolis: Scaling Large Language Model-based Multi-Agent Simulation with Out-of-order Execution](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03519)\n- [ICML'24] [AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fdu24h.html)\n\n\n## RL Post-Training\n- [ICLR'26] [Revisiting Parameter Server in LLM Post-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.19362)\n- [arxiv'26] [Jet-RL: Enabling On-Policy FP8 Reinforcement Learning with Unified Training and Rollout Precision Flow](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14243)\n- [arxiv'26] [Unleashing Efficient Asynchronous RL Post-Training via Staleness-Constrained Rollout 
Coordination](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12784)\n- [arxiv'26] [OrchestrRL: Dynamic Compute and Network Orchestration for Disaggregated RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01209)\n- [arxiv'25] [HetRL: Efficient Reinforcement Learning for LLMs in Heterogeneous Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12476)\n- [arxiv'25] [ThreadWeaver: Adaptive Threading for Efficient Parallel Reasoning in Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.07843)\n- [arxiv'25] [RLHFSpec: Breaking the Efficiency Bottleneck in RLHF Training via Adaptive Drafting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04752)\n- [arxiv'25] [Fast LLM Post-training via Decoupled and Best-of-N Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16193)\n- [arxiv'25] [Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16665)\n- [arxiv'25] [Beat the long tail: Distribution-Aware Speculative Decoding for RL Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13841)\n- [arxiv'25] [WeChat-YATT: A Scalable, Simple, Efficient, and Production Ready Training Library](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.07970)\n- [arxiv'25] [The Path Not Taken: RLVR Provably Learns Off the Principals](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08567)\n- [arxiv'25] [AReaL-Hex: Accommodating Asynchronous RL Training over Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00796)\n- [NeurIPS'25] [Greedy Sampling Is Provably Efficient for RLHF](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.24700)\n- [arxiv'25] [Ask a Strong LLM Judge when Your Reward Model is Uncertain](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20369)\n- [arxiv'25] [RLBoost: Harvesting Preemptible Resources for Cost-Efficient Reinforcement Learning on LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19225)\n- [arxiv'25] [Laminar: A Scalable Asynchronous RL Post-Training Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.12633)\n- [arxiv'25] [The Art of Scaling Reinforcement Learning Compute for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13786)\n- [arxiv'25] [xRouter: Training Cost-Aware LLMs Orchestration System via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08439)\n- [arxiv'25] [Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.07242)\n- [arxiv'25] [Learning from Failures: Understanding LLM Alignment through Failure-Aware Inverse RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06092)\n- [arxiv'25] [Spurious Rewards: Rethinking Training Signals in RLVR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10947)\n- [arxiv'25] [Quagmires in SFT-RL Post-Training: When High SFT Scores Mislead and What to Use Instead](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01624)\n- [arxiv'25] [RL in the Wild: Characterizing RLVR Training in LLM Deployment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25279)\n- [arxiv'25] [APRIL: Active Partial Rollouts in Reinforcement Learning to tame long-tail generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18521v1)\n- [NeurIPS'25] [AReaL: Asynchronous Reinforcement Learning for Efficient and Scalable Language Reasoning](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Fposter\u002F117538)\n- [arxiv'25] [ToRL: Scaling Tool-Integrated RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23383)\n- [arxiv'25] [VerlTool: Towards Holistic Agentic 
Reinforcement Learning with Tool Use](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01055v1)\n- [arxiv'25] [Parallel-R1: Towards Parallel Thinking via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07980)\n- [Survey :mag:] [arxiv'25] [A Survey of Reinforcement Learning for Large Reasoning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08827)\n- [arxiv'25] [RewardDance: Reward Scaling in Visual Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08826)\n- [arxiv'25] [floq: Training Critics via Flow-Matching for Scaling Compute in Value-Based RL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06863)\n- [arxiv'25] [ParaThinker: Native Parallel Thinking as a New Paradigm to Scale LLM Test-time Compute](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04475)\n- [arxiv'25] [History Rhymes: Accelerating LLM Reinforcement Learning with RhymeRL](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18588)\n- [COLM'25] [Sample Efficient Preference Alignment in LLMs via Active Exploration](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Vi5cIfIslX)\n- [COLM'25] [Synthetic Data Generation & Multi-Step RL for Reasoning & Tool Use](https:\u002F\u002Fopenreview.net\u002Fpdf?id=oN9STRYQVa)\n- [arxiv'25] [SeamlessFlow: A Trainer Agent Isolation RL Framework Achieving Bubble-Free Pipelines via Tag Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11553)\n- [arxiv'25] [SPECS: Faster Test-Time Scaling through Speculative Drafts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15733)\n- [arxiv'25] [Balanced Actor Initialization: Stable RLHF Training of Distillation-Based Reasoning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00309)\n- [COLM'25] [Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.15507)\n- [arxiv'25] [ReTool: Reinforcement Learning for Strategic Tool Use in LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11536)\n- [IPDPS'25] [FlexRLHF: A Flexible Placement and Parallelism Framework for Efficient RLHF Training](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F11078517)\n- [arxiv'25] [GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.19457)\n- [ACL'25] [RLKGF: Reinforcement Learning from Knowledge Graph Feedback Without Human Annotations](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.344.pdf)\n- [arxiv'25] [Multi-module GRPO: Composing Policy Gradients and Prompt Optimization for Language Model Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04660)\n- [arxiv'25] [Scaling RL to Long Videos](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07966)\n- [arxiv'25] [Test-Time Training Done Right](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23884)\n- [arxiv'25] [LlamaRL: A Distributed Asynchronous Reinforcement Learning Framework for Efficient Large-scale LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24034)\n- [arxiv'25] [On-Policy RL with Optimal Reward Baseline](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23585)\n- [arxiv'25] [StreamRL: Scalable, Heterogeneous, and Elastic RL for LLMs with Disaggregated Stream Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15930)\n- [arxiv'25] [DAPO: An Open-Source LLM Reinforcement Learning System at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14476)\n- [MLSys'25] [ReaL: Efficient RLHF Training of Large Language Models with Parameter 
Reallocation](https:\u002F\u002Fopenreview.net\u002Fforum?id=yLU1zRf95d)\n- [arxiv'25] [Reward Reasoning Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14674)\n- [arxiv'24] [Optimizing RLHF Training for Large Language Models with Stage Fusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13221)\n\n## Multimodal\nhttps:\u002F\u002Fgithub.com\u002Ffriedrichor\u002FAwesome-Multimodal-Papers\n\n- [arxiv'26] [vLLM-Omni: Fully Disaggregated Serving for Any-to-Any Multimodal Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.02204)\n- [arxiv'26] [VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16973)\n- [arxiv'26] [EPD-Serve: A Flexible Multimodal EPD Disaggregation Inference Serving System On Ascend](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11590)\n- [ASPLOS'26] [Dynamic Sparsity in Large-Scale Video DiT Training](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762216)\n- [arxiv'25] [Cornserve: Efficiently Serving Any-to-Any Multimodal Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14098)\n- [arxiv'25] [FoundationMotion: Auto-Labeling and Reasoning about Spatial Movement in Videos](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10927)\n- [arxiv'25] [MoDES: Accelerating Mixture-of-Experts Multimodal Large Language Models via Dynamic Expert Skipping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15690)\n- [SoCC'25] [ModServe: Modality- and Stage-Aware Resource Disaggregation for Scalable Multimodal Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00937)\n- [arxiv'25] [FlowMM: Cross-Modal Information Flow Guided KV Cache Merging for Efficient Multimodal Context Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.05534)\n- [arxiv'25] [OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15870)\n- [arxiv'25] [Fast-dLLM v2: Efficient Block-Diffusion LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26328)\n- [arxiv'25] [Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22618)\n- [arxiv'25] [Mordal: Automated Pretrained Model Selection for Vision Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00241)\n- [arxiv'25] [Dimple: Discrete Diffusion Multimodal Large Language Model with Parallel Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16990)\n- [arxiv'24] [LlamaFusion: Adapting Pretrained Language Models for Multimodal Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15188)\n- [Survey :mag:] [arxiv'24] [A Survey of Resource-efficient LLM and Multimodal Foundation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08092)\n\n## Hybrid LLMs\n- [MICRO'25] [HLX: A Unified Pipelined Architecture for Optimized Performance of Hybrid Transformer-Mamba Language Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756115)\n- [MLSys'25] [Marconi: Prefix Caching for the Era of Hybrid LLMs](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3260)\n\n## Others\n- [arxiv'26] [Long-term Monitoring of Kernel and Hardware Events to Understand Latency Variance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10572)\n- [ASPLOS'26] [cuJSON: A Highly Parallel JSON Parser for GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3760250.3762222)\n- [arxiv'25] [Cyclotron: Compilation of Recurrences to Distributed and Systolic 
Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.09987)\n- [arxiv'25] [Streaming Tensor Program: A streaming abstraction for dynamic parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07776)\n- [arxiv'25] [OckBench: Measuring the Efficiency of LLM Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.05722)\n- [SC workshop'25] [Roofline Analysis of Tightly-Coupled CPU-GPU Superchips: A Study on MI300A and GH200](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767497)\n- [NeurIPS'25] [Spark Transformer: Reactivating Sparsity in FFN and Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06644)\n- [MICRO'25] [ORCHES: Orchestrated Test-Time-Compute-based LLM Reasoning on Collaborative GPU-PIM HEterogeneous System](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756039)\n- [arxiv'25] [vAttention: Verified Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05688)\n- [USENIX ;login:] [Wafer-Scale AI Compute: A System Software Perspective](https:\u002F\u002Fwww.usenix.org\u002Fpublications\u002Floginonline\u002Fwafer-scale-ai-compute-system-software-perspective)\n- [arxiv'25] [Training Large Language Models To Reason In Parallel With Global Forking Tokens](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05132)\n- [arxiv'25] [How to Train Your Advisor: Steering Black-Box LLMs with Advisor Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02453)\n- [arxiv'25] [Slm-mux: Orchestrating small language models for reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05077)\n- [arxiv'25] [Hybrid Architectures for Language Models: Systematic Analysis and Design Insights](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04800)\n- [arxiv'25] [Less is More: Recursive Reasoning with Tiny Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04871v1)\n- [arxiv'25] [ThinKV: Thought-Adaptive KV Cache Compression for Efficient Reasoning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01290)\n- [arxiv'25] [Rethinking Thinking Tokens: LLMs as Improvement Operators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01123)\n- [arxiv'25] [Generalized Parallel Scaling with Interdependent Generations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01143)\n- [arxiv'25] [Composer: A Search Framework for Hybrid Neural Architecture Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00379)\n- [arxiv'25] [dParallel: Learnable Parallel Decoding for dLLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26488)\n- [NeurIPS'25] [Speculate Deep and Accurate: Lossless and Training-Free Acceleration for Offloaded LLMs via Substitute Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18344)\n- [arxiv'25] [AI Factories: It's time to rethink the Cloud-HPC divide](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12849)\n- [arxiv'25] [Efficient Training-Free Online Routing for High-Volume Multi-LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02718)\n- [arxiv'25] [SharedRep-RLHF: A Shared Representation Approach to RLHF with Diverse Preferences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.03672)\n- [arxiv'25] [Learning to Refine: Self-Refinement of Parallel Reasoning in LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00084)\n- [arxiv'25] [LLaVA-Critic-R1: Your Critic Model is Secretly a Strong Policy Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00676)\n- [arxiv'25] [DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research 
Synthesis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20033)\n- [VLDB'25] [Powerful GPUs or Fast Interconnects: Analyzing Relational Workloads on Modern GPUs](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp4350-kabic.pdf)\n- [arxiv'25] [Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.07101)\n- [arxiv'25] [Difficulty-Based Preference Data Selection by DPO Implicit Reward Gap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04149)\n- [arxiv'25] [LobRA: Multi-tenant Fine-tuning over Heterogeneous Data](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp2616-fu.pdf)\n- [arxiv'25] [Copilot Arena: A Platform for Code LLM Evaluation in the Wild](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09328)\n- [arxiv'25] [ElasticMM: Efficient Multimodal LLMs Serving with Elastic Multimodal Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10069)\n- [MICRO'25] [Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10178)\n- [CFAgentic @ ICML'25] [LLMSELECTOR: Learning to Select Models in Compound AI Systems](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NphowWHYJj)\n- [arxiv'25] [Libra: Synergizing CUDA and Tensor Cores for High-Performance Sparse Matrix Multiplication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22714)\n- [arxiv'25] [Prompt-to-Leaderboard: Prompt-Adaptive LLM Evaluations](https:\u002F\u002Fopenreview.net\u002Fforum?id=7VPRrzFEN8) [[Code](https:\u002F\u002Fgithub.com\u002Flmarena\u002Fp2l)]\n- [ISCA'25] [Meta’s Second Generation AI Chip: Model-Chip Co-Design and Productionization Experiences](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731409)\n- [ISCA'25] [Debunking the CUDA Myth Towards GPU-based AI Systems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731050)\n- [ISCA'25] [UGPU: Dynamically Constructing Unbalanced GPUs for Enhanced Resource Efficiency](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731103)\n- [arxiv'25] [SeerAttention-R: Sparse Attention Adaptation for Long Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08889)\n- [arxiv'25] [Reinforcement Pre-Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08007)\n- [arxiv'25] [MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22101)\n- [NSDI'25] [Optimizing RLHF Training for Large Language Models with Stage Fusion](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fzhong)\n- [arxiv'25] [Thinking Short and Right Over Thinking Long: Serving LLM Reasoning Efficiently and Accurately](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13326)\n- [arxiv'25] [Faster Video Diffusion with Trainable Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13389)\n- [arxiv'25] [SSR: Speculative Parallel Scaling Reasoning in Test-time](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15340)\n- [arxiv'25] [Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15431)\n- [arxiv'25] [Think Only When You Need with Large Hybrid-Reasoning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14631)\n- [MLSys'25] [Optimizing LLM Queries in Relational Data Analytics 
Workloads](https:\u002F\u002Fopenreview.net\u002Fforum?id=R7bK9yycHp)\n- [arxiv'25] [Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.04842)\n- [arxiv'25] [Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11788)\n- [arxiv'25] [Process Reward Models That Think](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16828)\n- [arxiv'25] [Seed-Thinking-v1.5: Advancing Superb Reasoning Models with Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13914)\n- [arxiv'25] [Sleep-time Compute: Beyond Inference Scaling at Test-time](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13171)\n- [arxiv'25] [SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07891)\n- [arxiv'25] [Scaling Laws for Native Multimodal Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07951)\n- [arxiv'25] [OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07096)\n- [arxiv'25] [NotebookOS: A Notebook Operating System for Interactive Training with On-Demand GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20591)\n- [arxiv'25] [Alchemist: Towards the Design of Efficient Online Continual Learning System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01066)\n- [arxiv'25] [Linear Attention for Efficient Bidirectional Sequence Modeling](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.16249)\n- [arxiv'25] [S*: Test Time Scaling for Code Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14382)\n- [arxiv'25] [Optimizing Model Selection for Compound AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14815)\n- [arxiv'25] [Efficient-vDiT: Efficient Video Diffusion Transformers with Attention Tile](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06155)\n- [arxiv'25] [BARE: Combining Base and Instruction-Tuned Language Models for Better Synthetic Data Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01697)\n- [arxiv'25] [Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01776)\n- [arxiv'25] [Adaptive Semantic Prompt Caching with VectorQ](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.03771)\n- [EuroSys'25] [HybridFlow: A Flexible and Efficient RLHF Framework](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3689031.3696075)\n- [arxiv'25] [Measuring GPU utilization one level deeper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16909)\n- [ASPLOS'25] [PipeLLM: Fast and Confidential Large Language Model Services with Speculative Pipelined Encryption](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3669940.3707224)\n- [arxiv'24] [Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16737)\n- [arxiv'24] [Debunking the CUDA Myth Towards GPU-based AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00210)\n- [arxiv'24] [XGrammar: Flexible and Efficient Structured Generation Engine for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15100)\n- [CPAL'24 (PMLR)] [Jaxpruner: A Concise Library for 
Sparsity Research](https:\u002F\u002Fproceedings.mlr.press\u002Fv234\u002Flee24a.html)\n- [arxiv'24] [Scorch: A Library for Sparse Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16883)\n- [arxiv'24] [Drowning in Documents: Consequences of Scaling Reranker Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11767)\n- [arxiv'24] [Crafting Interpretable Embeddings for Language Neuroscience by Asking LLMs Questions](https:\u002F\u002Fopenreview.net\u002Fforum?id=mxMvWwyBWe)\n- [arxiv'24] [Computational Bottlenecks of Training Small-scale Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19456)\n- [Survey :mag:] [arxiv'24] [A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Applications, Collaboration with LLMs, and Trustworthiness](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03350)\n- [NeurIPS'24] [Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02419)\n- [arxiv'24] [Stochastic Monkeys at Play: Random Augmentations Cheaply Break LLM Safety Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02785)\n- [arxiv'24] [DroidSpeak: Enhancing Cross-LLM Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02820v1)\n- [arxiv'24] [Disaggregating Embedding Recommendation Systems with FlexEMR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12794)\n- [arxiv'24] [JudgeBench: A Benchmark for Evaluating LLM-based Judges](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12784)\n- [arxiv'24] [You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17258)\n- [arxiv'24] [Computing in the Era of Large Generative Models: From Cloud-Native to AI-Native](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12230)\n- [ATC'24] [Centimani: Enabling Fast AI Accelerator Selection for DNN Training with a Novel Performance Predictor](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxie)\n- [arxiv'23] [Efficiently Programming Large Language Models using SGLang](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07104)\n- [MICRO'23] [Path Forward Beyond Simulators: Fast and Accurate GPU Execution Time Prediction for DNN Workloads](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3613424.3614277)\n- [arxiv'23] [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18290)\n- [arxiv'22] [Training language models to follow instructions with human feedback](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02155)\n\n\n# References\nThis repository is motivated by:\n- https:\u002F\u002Fgithub.com\u002FHuaizhengZhang\u002FAwesome-System-for-Machine-Learning\n- https:\u002F\u002Fgithub.com\u002FS-Lab-System-Group\u002FAwesome-DL-Scheduling-Papers\n- https:\u002F\u002Fgithub.com\u002Fganler\u002FResearchReading\n- https:\u002F\u002Fjeongseob.github.io\u002Freadings_mlsys.html\n- https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems\n- https:\u002F\u002Fgithub.com\u002FConnollyLeon\u002Fawesome-Auto-Parallelism\n","# 机器学习系统论文列表\n\n![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)\n[![欢迎提交PR](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fbyungsoo-oh\u002Fml-systems-papers\u002Fpulls)\n\n涵盖机器学习系统广泛主题的论文列表  \n> 注：综述类论文以 [Survey 🔍] 前缀标注。\n\n## 目录\n\u003C!-- TOC -->\n\n- 
[机器学习系统论文列表](#paper-list-for-machine-learning-systems)\n  - [目录](#table-of-contents)\n  - [数据处理](#data-processing)\n    - [数据流水线优化](#data-pipeline-optimization)\n    - [用于机器学习训练的缓存与分布式存储](#caching-and-distributed-storage-for-ml-training)\n    - [大模型数据平面](#llm-data-plane)\n    - [其他](#others)\n  - [训练系统](#training-system)\n    - [GPU集群上的机器学习作业分析](#ml-job-analysis-on-gpu-clusters)\n    - [资源调度](#resource-scheduling)\n    - [分布式训练](#distributed-training)\n    - [AutoML](#automl)\n    - [GNN训练系统](#gnn-training-system)\n  - [推理系统](#inference-system)\n  - [注意力机制优化](#attention-optimization)\n  - [专家混合（MoE）](#mixture-of-experts-moe)\n  - [分布式机器学习的通信优化与网络基础设施](#communication-optimization--network-infrastructure-for-distributed-ml)\n  - [容错与拖尾任务缓解](#fault-tolerance--straggler-mitigation)\n  - [GPU显存管理与优化](#gpu-memory-management--optimization)\n  - [GPU共享](#gpu-sharing)\n  - [编译器](#compiler)\n  - [GPU内核优化](#gpu-kernel-optimization)\n  - [大模型长上下文](#llm-long-context)\n  - [模型压缩](#model-compression)\n  - [联邦学习](#federated-learning)\n  - [隐私保护的机器学习](#privacy-preserving-ml)\n  - [机器学习API与应用端优化](#ml-apis--application-side-optimization)\n  - [面向系统的机器学习](#ml-for-systems)\n  - [能源效率](#energy-efficiency)\n  - [检索增强生成（RAG）](#retrieval-augmented-generation-rag)\n  - [仿真](#simulation)\n  - [面向智能体AI的系统](#systems-for-agentic-ai)\n  - [强化学习后训练](#rl-post-training)\n  - [多模态](#multimodal)\n  - [混合大模型](#hybrid-llms)\n  - [其他](#others-1)\n- [参考文献](#references)\n\n\u003C!-- \u002FTOC -->\n\n## 数据处理\n\n### 数据流水线优化\n**概述**\n- [arxiv'25] [可扩展且高性能的数据加载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20067)\n- [arxiv'25] [OVERLORD：多源大型基础模型训练中DataLoader的终极扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09844)\n- [arxiv'25] [用于高效、容错的异构执行的流式批处理模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12407)\n- [arxiv'25] [多租户智能网卡上的推荐系统网络内预处理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12032)\n- [VLDB'25] [cedar：可组合且优化的机器学习输入数据流水线](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08895)\n- [HotInfra'24] [Lotus：刻画机器学习流水线中的架构级CPU预处理](https:\u002F\u002Fkexinrong.github.io\u002Flab\u002Ffiles\u002Flotus-hotinfra24.pdf)\n- [arxiv'24] [TensorSocket：深度学习训练中的共享数据加载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.18749)\n- [arxiv'24] [ML流水线中高效的表格型数据预处理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14912)\n- [MLSys'22] Plumber：诊断并消除机器学习数据流水线中的性能瓶颈\n- [ISCA'22] 大规模深度推荐模型训练中的数据存储与摄取理解\n- [SIGMOD'22] 我的训练瓶颈在哪里？深度学习预处理流水线中的隐藏权衡\n- [VLDB'21] 分析并缓解DNN训练中的数据停滞\n- [VLDB'21] tf.data：一个机器学习数据处理框架\n\n**预处理停滞**\n- [arxiv'24] [PREBA：基于多实例GPU的AI推理服务器的软硬件协同设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [ATC'24] [Pecan：通过自动变换排序与混合放置实现成本效益高的ML数据预处理](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fgraur)\n- [HotStorage'24] [一种减少DL训练中数据流量的选择性预处理卸载框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3655038.3665947)\n- [VLDB'24] [FusionFlow：利用CPU-GPU协作加速机器学习数据预处理](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp863-kim.pdf)\n- [arxiv'23] [Rinas：使用数据集打乱进行训练可以既通用又快速](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02368)\n- [CVPR'23] [FFCV：通过消除数据瓶颈加速训练](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FLeclerc_FFCV_Accelerating_Training_by_Removing_Data_Bottlenecks_CVPR_2023_paper.pdf)\n- [RecSys'23] [InTune：基于强化学习的深度推荐模型数据流水线优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08500)\n- [SIGMOD'23] GoldMiner：深度学习训练数据预处理流水线的弹性扩展\n- [VLDB'23] [FastFlow：通过输入数据流水线的智能卸载加速深度学习模型训练](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp1086-um.pdf)\n- 
[SoCC'23] [tf.data service：拆分ML输入数据处理的一个案例](https:\u002F\u002Fanakli.inf.ethz.ch\u002Fpapers\u002Ftfdata_service_SoCC23.pdf)\n  - [arxiv版本](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.14826)\n- [ATC'22] Cachew：将机器学习输入数据处理作为一项服务\n- [OSDI'22] 在多租户集群上调度DNN时超越GPU的视角\n- [ICPP'19] DLBooster：通过卸载数据预处理流水线来提升端到端深度学习工作流\n\n**获取停滞（I\u002FO）**\n- [TACO'23] [Fastensor：优化从SSD到GPU的张量I\u002FO路径以用于深度学习训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3630108)\n- [ICPP'22] Lobster：面向分布式DNN训练的负载均衡感知I\u002FO\n- [SC'21] 面向分布式机器学习I\u002FO的洞察力预取\n\n**特定工作负载（GNN、DLRM）**\n- [VLDB'25] [通过两级特征压缩消除大规模图上GNN训练中的数据处理瓶颈](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3681954.3681968)\n- [ISCA'24] [PreSto：用于训练推荐模型的存储内数据预处理系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14571)\n- [arxiv'23] [迈向以数据为中心的图机器学习：综述与展望](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10979)\n- [arxiv'23] [FlexShard：面向工业规模序列推荐模型的灵活分片](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02959)\n- [MLSys'23] [RecD：用于端到端深度学习推荐模型训练基础设施的去重](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fa1126573153ad7e9f44ba80e99316482-Abstract-mlsys2023.html)\n- [ASPLOS'22] [RecShard：基于统计特征的内存优化，用于工业规模的神经推荐](https:\u002F\u002Fwww-cs.stanford.edu\u002Fpeople\u002Ftrippel\u002Fpubs\u002FRecShard-Sethi-ASPLOS-22.pdf)\n- [RecSys'23] [InTune：基于强化学习的深度推荐模型数据流水线优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.08500)\n- [arxiv'23] MTrainS：利用异构内存提高DLRM训练效率\n- [SOSP'23] [Bagpipe：加速深度推荐模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.12429)\n- [SOSP'23] gSampler：面向图学习的通用且高效的基于GPU的图采样\n- [NSDI'23] BGL：通过优化图数据I\u002FO和预处理实现GPU高效的GNN训练\n- [DAC'22] 一种联合管理中间件，用于提升使用SSD的深度推荐系统的训练性能\n- [VLDB'22] 利用流行选择加速推荐系统训练\n\n### 机器学习训练中的缓存与分布式存储\n- [ATC'25] [HyCache：用于加速 DNN 输入预处理流水线的混合缓存](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fjha)\n- [ICDE'25] [MLKV：基于磁盘的键值存储，高效扩展大规模嵌入模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.01506)\n- [TPDS'23] 面向云原生平台数据密集型 AI 应用的高级数据抽象与弹性数据缓存\n- [SOSP'23] UGACHE：面向基于嵌入的深度学习的统一 GPU 缓存\n- [ATC'23] Tectonic-Shift：用于大规模 ML 训练的复合存储架构\n- [EuroSys'23] SiloD：面向深度学习集群的缓存与调度协同设计 [也见于 [2.1](#21-dl-scheduling)]\n- [FAST'23] [SHADE：为分布式深度学习训练实现基础性可缓存性](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast23\u002Fpresentation\u002Fkhan)\n- [HPCA'23] iCACHE：一种基于重要性采样的缓存，用于加速 I\u002FO 瓶颈型 DNN 模型训练\n- [NeurIPS'22] [具有共享数据准备功能的深度学习数据加载器](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F6d538a6e667960b168d3d947eb6207a6-Abstract-Conference.html)\n- [CLUSTER'22] [Hvac：消除大规模深度学习应用的 I\u002FO 瓶颈](https:\u002F\u002Fwww.osti.gov\u002Fservlets\u002Fpurl\u002F1902810)\n- [ICDE'22] Fluid：面向云原生深度学习训练作业的数据集抽象与弹性加速\n- [ATC'21] [焕新您的训练数据：复用部分增强样本以加快深度神经网络训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc21\u002Fpresentation\u002Flee)\n- [FAST'20] [Quiver：面向深度学习的智能存储缓存](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast20\u002Fpresentation\u002Fkumar)\n- [ICPP'20] [DIESEL：用于大规模深度学习训练的数据集驱动分布式存储与缓存系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3404397.3404472)\n- [arXiv'19] [通过数据回声加速神经网络训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.05550)\n- [HotCloud'19] [在机器学习集群中统一数据加载的理由](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fhotcloud19\u002Fpresentation\u002Fkakaraparthy)\n\n### LLM 数据平面\n- [SIGMOD'26] [Hydraulis：通过并行策略与数据分配的协同设计来平衡大型 Transformer 模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3769802)\n- [arxiv'25] [DataFlow：数据驱动型 AI 
时代下用于统一数据准备与工作流自动化的框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16676)\n- [EMNLP'25] [揭秘 LLM 预训练中的合成数据：规模法则、优势与陷阱的系统性研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01631)\n- [ICDE'25] [优化预训练数据管理的训练数据分布估计](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficde\u002F2025\u002F360300e640\u002F26FZD2zy2IM)\n- [arxiv'25] [Mixtera：用于基础模型训练的数据平面](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19790)\n\n### 其他\n**数据格式**\n- [ECCV'22] L3：面向高分辨率、高吞吐量 DNN 训练的加速器友好型无损图像格式\n- [VLDB'21] 渐进式压缩记录：从深度学习数据中节省一个字节\n\n**数据管道的公平性与正确性**\n- [CIDR'21] 原生机器学习管道中数据预处理的轻量级检查\n\n**数据标注自动化**\n- [VLDB'18] Snorkel：利用弱监督快速生成训练数据\n\n## 训练系统\n### GPU 集群上的 ML 作业分析\n- [ICSE'24] [关于深度学习作业低 GPU 利用率的实证研究](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficse\u002F2024\u002F021700a880\u002F1V5BksrVgsg)\n- [NSDI'24] 数据中心内大型语言模型开发的特征分析\n- [NSDI'22] 实际环境中的 MLaaS：大规模异构 GPU 集群中的工作负载分析与调度 (`PAI`)\n- [ATC'19] 大规模多租户 GPU 集群中 DNN 训练工作负载的分析 (`Philly`)\n\n### 资源调度\n- [arxiv'26] [SkyNomad：利用多区域竞价实例最小化 AI 批处理作业成本](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06520)\n\n- [OSDI'25] [解耦与分解：基于 DeDe 的资源分配扩展](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fxu)\n- [SoCC'25] [Cuckoo：面向异构 GPU 的截止时间感知作业打包，用于深度学习模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3772052.3772266)\n- [arxiv'25] [面向大型语言模型的语义感知 GPU 集群调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03334)\n- [arxiv'25] [基于细粒度多 XPU 抽象的自动驾驶应用整体异构调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09503)\n- [arxiv'25] [Tesserae：适用于深度学习工作负载的可扩展放置策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04953)\n- [arxiv'25] [LeMix：面向多 GPU 系统的 LLM 训练与推理统一调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.21276)\n- [EuroSys'25] [Eva：基于云的成本高效集群调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.07437)\n- [arxiv'25] [TAPAS：面向云平台中 LLM 推理的热管理和功耗感知调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.02600)\n\n- [arxiv'24] [Zeal：以“解耦与分解”重新思考大规模资源分配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11447v1)\n- [TACO'24] [驯服深度学习训练集群中的灵活作业打包](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3711927)\n- [SoCC'24] [Kale：面向在线 DL 模型训练的弹性 GPU 调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3698038.3698532)\n- [arxiv'24] [Rubick：利用作业可重构性进行深度学习集群调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08586)\n- [SC'24] [PAL：面向 GPU 集群中 ML 工作负载调度的变异性感知策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11919)\n- [OSDI'24] [MAST：超大规模下跨地理分布数据中心的全局 ML 训练调度](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fchoudhury)\n- [ASPLOS'24] [Heet：加速异构深度学习集群中的弹性训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640375)\n- [Middleware'24] [异构 GPU 集群中的公平性与最优资源效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.18545)\n- [IPDPS'24] Hadar：面向深度学习集群的异构感知优化型在线调度\n- [EuroSys'24] [Blox：深度学习调度器的模块化工具包](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12621)\n- [NSDI'24] Swing：通过捷径环路实现更高带宽的 Allreduce\n- [NSDI'24] 面向分布式 DNN 训练的领域特定网络传输\n- [NSDI'24] Vulcan：面向实时 ML 分析的自动查询计划\n- [NSDI'24] [CASSINI：机器学习集群中的网络感知作业调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00852)\n\n- [综述 :mag:] [ACM CSUR'23] [GPU 数据中心中的深度学习工作负载调度：综述](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3638757)\n- [arxiv'23] 面向深度学习的节能型 GPU 集群调度\n- [SC'23] [EasyScale：深度学习的精度一致弹性训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3581784.3607054)\n- [ICPP'23] CoTrain：在 GPU 和 CPU 上并行进行大模型训练的高效调度\n- [ICPP'23] 在 ML 训练中拥抱不确定性以实现资源分配的公平性\n- [SOSP'23] 
[Sia：面向异构环境、优化吞吐量的 ML 集群调度](https:\u002F\u002Fwww.pdl.cmu.edu\u002FPDL-FTP\u002FBigLearning\u002Fsia_sosp23-final.pdf)\n- [NSDI'23] Shockwave：主动、公平且高效的集群调度，用于机器学习的动态适应\n- [EuroSys'23] SiloD：深度学习集群中缓存与调度的协同设计 [也见于 [1.2](#12-caching)]\n- [EuroSys'23] Lyra：深度学习集群的弹性调度\n- [EuroSys'23] [ElasticFlow：面向分布式深度学习的弹性无服务器训练平台](https:\u002F\u002Fgudiandian.github.io\u002Fattaches\u002Fasplos23\u002Fasplosb23main-p360.pdf)\n- [ASPLOS'23] Lucid：一款非侵入式、可扩展且可解释的深度学习训练作业调度器\n\n- [arxiv'22] [Singularity：AI 工作负载的行星尺度抢占式弹性调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.07848)\n- [综述 :mag:] [arxiv, 2022] [GPU 数据中心中的深度学习工作负载调度：分类、挑战与展望](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11913)\n- [SoCC'22] [ESCHER：使用临时资源的表达式调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3542929.3563498)\n- [NSDI'22] MLaaS 在实际场景中：大规模异构 GPU 集群中的工作负载分析与调度 (`PAI`)\n- [OSDI'22] 超越 GPU：面向多租户集群的 DNN 调度 (`Synergy`)\n- [SIGCOMM'22] 面向深度学习训练的多资源交错调度 (`Muri`)\n\n- [MLSys'21] Wavelet：采用 Tick-Tock 调度实现高效的 DNN 训练\n- [SoCC'21] Chronus：一种新颖的截止时间感知深度学习训练作业调度器\n- [SC'21] [大规模 GPU 数据中心中深度学习工作负载的特征描述与预测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.01313) (`Helios`)\n- [OSDI'21] 隐私预算调度 (`DPF`)\n- [NSDI'21] 分布式深度学习的弹性资源共享 (`AFS`)\n- [OSDI'21] Pollux：为优化吞吐量而协同适应的集群调度\n\n- [EuroSys'20] 在异构 GPU 集群中平衡效率与公平性，用于深度学习 (`GandivaFair`)\n- [NSDI'20] Themis：公平且高效的 GPU 集群调度\n- [OSDI'20] HiveD：在保证权益的前提下共享 GPU 集群进行深度学习\n- [OSDI'20] 面向深度学习工作负载的异构感知集群调度策略 (`Gavel`)\n- [EuroSys'20] AlloX：混合集群中的计算资源分配\n- [MLSys'20] 分布式深度学习中的资源弹性\n\n- [NSDI'19] Tiresias：面向分布式深度学习的 GPU 集群管理器\n- [ATC'19] 大规模多租户 GPU 集群中 DNN 训练工作负载的分析 (`Philly`)\n\n- [EuroSys'18] Optimus：高效的动态资源调度器，专用于深度学习集群\n- [OSDI'18] Gandiva：面向深度学习的内省式集群调度\n\n### 分布式训练\n- [HPCA'26] [WATOS：晶圆级芯片上高效的 LLM 训练策略与架构协同探索](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12279)\n- [ASPLOS'26] [SuperOffload：释放超级芯片上大规模 LLM 训练的强大能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21271)\n\n- [arxiv'25] [深入探索异构抢占式GPU上的3D并行：设计与启示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20953)\n- [arxiv'25] [SIGMA：基于早期硬件的AI赋能训练栈](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13488)\n- [arxiv'25] [BOOST：面向低秩大语言模型的瓶颈优化可扩展训练框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12131)\n- [NeurIPS'25] [协同张量并行与流水线并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.27257)\n- [arxiv'25] [AsyncHZP：用于可扩展LLM训练的异步调度分层ZeRO并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20111)\n- [arxiv'25] [PRISM：大规模分布式训练的概率化运行时洞察与可扩展性能建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15596)\n- [NeurIPS'25] [先注意力后处理：更高效地利用先注意力以提升Transformer训练效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14614)\n- [arxiv'25] [一种灵活的可编程流水线并行框架，用于高效DNN训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05112)\n- [arxiv'25] [SlimPack：细粒度非对称打包技术，实现均衡高效的变长LLM训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26246)\n- [arxiv'25] [AdaPtis：通过自适应流水线并行减少异构模型中的流水线空泡](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23722)\n- [arxiv'25] [HAPT：面向异构集群的异质性感知自动化并行训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24859)\n- [arxiv'25] [去中心化深度学习中数据并行性的规模化扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12213)\n- [arxiv'25] [Zorse：优化异构GPU集群上的LLM训练效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10392)\n- [arxiv'25] [TrainVerify：基于等价性的分布式LLM训练验证](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15961)\n- [arxiv'25] [通过GPUDirect Storage实现生命周期感知的张量卸载，以低成本高效训练LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06472)\n- [arxiv'25] [ZenFlow：通过异步更新实现无阻塞卸载训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12242)\n- [arxiv'25] 
[重新思考动态网络与异构计算：自动并行化方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02787)\n- [arxiv'25] [H2：迈向在超异构集群上高效进行大规模LLM训练，集群规模超过1000个芯片](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17548)\n- [arxiv'25] [动态LLM的均衡且弹性的端到端训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14864)\n- [arxiv'25] [ZenFlow：通过异步更新实现无阻塞卸载训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14468)\n- [arxiv'25] [SpanTrain：在CEE环境中，基于异构GPU和网络的跨领域模型分布式高效训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15536)\n- [arxiv'25] [语言模型的并行缩放定律](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.10475)\n- [arxiv'25] [Hetu v2：一种通用且可扩展的深度学习系统，支持分层及异构的单程序多数据标注](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20490)\n- [arxiv'25] [Sailor：自动化跨动态、异构及地理分布的集群进行分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17096)\n- [arxiv'25] [PipeWeaver：通过动态交错流水线应对大型多模态模型训练中的数据动态性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14145)\n- [arxiv'25] [并非所有注意力都必要：基础模型的分布式动态微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12471)\n- [arxiv'25] [WLB-LLM：面向大型语言模型训练的工作负载均衡4D并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17924)\n- [arxiv'25] [非均匀张量并行：缓解GPU故障对规模化LLM训练的影响](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06095)\n- [arxiv'25] [CFP：基于低开销性能分析，在保留无通信结构的前提下生成算子内并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.00598)\n- [arxiv'25] [OrchMLLM：通过批次后平衡编排多模态数据，加速多模态大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23830)\n- [arxiv'25] [Cornstarch：分布式多模态训练必须具备多模态意识](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11367)\n- [arxiv'25] [PipeOffload：通过内存优化提升流水线并行的可扩展性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01328)\n- [arxiv'25] [AutoHete：面向LLM的自动高效异构训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01890)\n- [arxiv'25] [Astra：在异构GPU上高效且经济地自动搜索并行策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13480)\n- [arxiv'25] [推理效率型语言模型的规模化扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.18107)\n- [arxiv'25] [MiniMax-01：借助闪电注意力扩展基础模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08313)\n\n- [SC'25] [Hypertron：通过探索高维并行化空间实现大模型的高效扩展](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759783)\n- [CLUSTER'25] [BMPipe：面向超大规模深度神经网络训练的气泡内存协同优化策略规划器](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186487\u002F2aCq0Sc5EDm)\n- [OSDI'25] [WLB-LLM：用于大型语言模型训练的工作负载均衡四维并行机制](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwang-zheng)\n- [ISCA'25] [FRED：用于三维并行深度神经网络训练的晶圆级互连结构](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731055)\n- [ISCA'25] [MeshSlice：面向分布式深度神经网络训练的高效二维张量并行技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731077)\n- [ISCA'25] [利用高效并行策略扩展Llama 3训练规模](https:\u002F\u002Faisystemcodesign.github.io\u002Fpapers\u002FLlama3-ISCA25.pdf)\n- [ICML'25] [HALoS：面向地理分布式大型语言模型训练的慢速网络下分层异步局部SGD算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04531)\n- [MLSys'25] [Radius：基于范围的梯度稀疏性技术，用于大型基础模型预训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=UCQPWBOWb6)\n- [ICLR'25] [TorchTitan：面向生产级大型语言模型预训练的一站式PyTorch原生解决方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06511)\n- [INFOCOM'25] [Espresso：利用云端GPU异构性实现低成本的大模型训练](https:\u002F\u002Ffangmingliu.github.io\u002Ffiles\u002Finfocom25-train.pdf)\n- [TPDS'25] [HpT：在异构众核架构上对时空注意力模型训练进行混合加速](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10820024)\n- [ASPLOS'25] [GraphPipe：通过图流水线并行提升深度神经网络训练的性能与可扩展性](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3669940.3707220)\n- [ASPLOS'25] 
[FlexSP：通过灵活的序列并行技术加速大型语言模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715998)\n- [ASPLOS'25] [Spindle：利用波前调度实现多任务大型模型的高效分布式训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715992)\n- [EuroSys'25] [JABAS：面向异构GPU上深度神经网络训练的联合自适应批处理与自动伸缩技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3696078)\n\n- [arxiv'24] [为大型语言模型自动规划最优并行策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00254)\n- [arxiv'24] [面向数据并行与模型并行的分布式语言模型训练自适应批量大小调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.21124)\n- [arxiv'24] [Frenzy：一种针对异构GPU集群的内存感知无服务器LLM训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14479)\n- [arxiv'24] [Echo：大规模分布式训练仿真](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12487)\n- [arxiv'24] [利用MPMD流水线并行扩展深度学习训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14374)\n- [arxiv'24] [揭秘变长序列下大型Transformer模型训练中的负载不均衡问题](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07894)\n- [arxiv'24] [HETHUB：面向大规模模型的异构集群分布式训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16256)\n- [arxiv'24] [以数据为中心且适应异构性的序列并行：高效LLM训练方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01523)\n- [arxiv'24] [借助4D并行与内存消耗估算器加速大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06465)\n- [arxiv'24] [BitPipe：双向交错流水线并行加速大模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19367)\n- [arxiv'24] [Cephalo：利用异构GPU集群训练Transformer模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01075)\n- [arxiv'24] [SimpleFSDP：结合torch.compile的更简单全分片数据并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00284)\n- [arxiv'24] [FusionLLM：基于地理分布GPU的去中心化LLM训练系统，支持自适应压缩](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12707)\n- [arxiv'24] [PipeFill：在流水线并行LLM训练的空闲期利用GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07192)\n- [arxiv'24] [Poplar：在异构GPU集群上高效扩展分布式DNN训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2408.12596)\n- [arxiv'24] [DistTrain：通过解耦式训练应对多模态大型语言模型中的模型与数据异构性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04275)\n- [arxiv'24] [基于数据异构性感知的模型管理实现高效多任务大型模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.03365)\n- [arxiv'24] [FlashFlex：适应异构环境的大型语言模型训练方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.01143v1)\n- [arxiv'24] [PARALLELGPUOS：基于验证推测的并发OS级GPU检查点与恢复系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12079)\n- [arxiv'24] [Unicron：规模化自愈型LLM训练的经济性优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00134)\n- [arxiv'24] [TBA：利用基于SSD的激活卸载加速大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10013)\n- [arxiv'24] [Optimus：通过挖掘空隙加速大规模多模态LLM训练](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.03505)\n- [综述 :mag:] [arxiv'24] [分布式基础设施上大型语言模型的高效训练：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20018)\n- [arxiv'24] [LoongTrain：采用头部上下文并行高效训练长序列LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18485)\n- [arxiv'24] [PAFT：用于高效LLM微调的并行训练范式](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17923)\n- [arxiv'24] [BurstAttention：面向超长序列的高效分布式注意力框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.09347)\n- [arxiv'24] [Branch-Train-MiX：将专家LLM混合进混合专家LLM中](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07816)\n- [arxiv'24] [通过灵活的工作负载控制加速异构张量并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11469)\n- [arxiv'24] [GRAWA：基于梯度的加权平均法用于深度学习模型的分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04206)\n- [arxiv'24] [BitDelta：你的微调可能只值一个比特](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10193)\n- [arxiv'24] [NutePrune：为大型语言模型提供高效渐进式剪枝，配备多位教师](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09773)\n- [arxiv'24] [加速扩散模型的并行采样](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09970)\n- 
[arxiv'24] [在异构集群上以最佳性能训练DNN模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.05302)\n- [arxiv'24] [打破MLPerf训练纪录：以BERT优化为例](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02447)\n- [arxiv'24] [LocMoE：用于大型语言模型训练的低开销MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.13920)\n- [arxiv'24] [重新评估内存平衡流水线并行：BPipe](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02088)\n- [arxiv'24] [InternEvo：通过混合并行与冗余分片实现高效长序列大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.09149)\n\n- [TPDS'24] [UMPIPE：基于不等微批次的深度神经网络训练流水线并行](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftd\u002F5555\u002F01\u002F10792656\u002F22AQNnaMR6U)\n- [综述 :mag:] [ACM CSUR'24] [基础模型的资源高效算法与系统：综述](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3706418)\n- [SOSP'24] [利用FractalTensor揭示DNN计算中的嵌套数据并行与数据重用](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3694715.3695961)\n- [SOSP'24] [实现大规模语言模型高效训练的并行性热切换](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695969)\n- [TACO'24] [ATP：通过智能GPU内存管理实现DNN训练的吞吐量峰值](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3701996)\n- [NeurIPS'24] [重新思考内存与通信开销，以实现大规模语言模型的数据并行训练效率提升](https:\u002F\u002Fopenreview.net\u002Fforum?id=4Un2TD9bNe)\n- [NeurIPS'24] [SpeedLoader：一种面向异构分布式LLM运行的高效I\u002FO方案](https:\u002F\u002Fopenreview.net\u002Fforum?id=Y2I0Fy4sm7)\n- [SC'24] [通过优化TT分解与微批次加速分布式DLRM训练](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a776\u002F21HUVYHhG1O)\n- [SC'24] [Democratizing AI：基于GPU的超级计算机上开源可扩展的LLM训练](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a036\u002F21HUV5yQsyQ)\n- [SoCC'24] [在AWS Trainium上进行大规模语言模型的分布式训练](https:\u002F\u002Fwww.amazon.science\u002Fpublications\u002Fdistributed-training-of-large-language-models-on-aws-trainium)\n- [TPDS'24] [AutoDDL：近似最优带宽代价的自动分布式深度学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06813)\n- [SOSP'24] [TENPLEX：使用可并行张量集合动态调整深度学习作业资源](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05181)\n- [ICPP'24] [AutoPipe：共享GPU集群中流水线并行性的自动配置](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673047)\n- [COLM'24] [LightSeq：面向长上下文Transformer分布式训练的序列级并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.03294)\n- [OSDI'24] [nnScaler：面向深度学习训练的约束引导并行化计划生成](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flin-zhiqi)\n  - [arxiv'23] [SuperScaler：通过统一抽象支持灵活的DNN并行化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08984)\n- [ATC'24] [利用高效的激活重计算与最优混合并行化加速大规模语言模型训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fyuan)\n- [ATC'24] [Metis：在异构GPU上实现快速自动分布式训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fum)\n- [ATC'24] [FwdLLM：通过扰动推理实现大规模语言模型的高效联邦微调](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxu-mengwei)\n- [ATC'24] [OPER：面向大规模推荐模型的最优性指导嵌入表并行化](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fwang)\n- [HPDC'24] [DataStates-LLM：面向大规模语言模型的惰性异步检查点](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.10707v1)\n- [ICML'24] [突破GPU显存限制，实现大型专家混合模型训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=uLpyWQPyF9)\n- [ICML'24] [集成硬件架构与设备放置搜索](https:\u002F\u002Fopenreview.net\u002Fpdf?id=ucl3B05EsX)\n- [MLSys'24] [DiffusionPipe：利用高效流水线训练大型扩散模型](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F45c1f6a8cbf2da59ebf2c802b4f742cd-Paper-Conference.pdf)\n- 
[MLSys'24] [Lancet：通过全图计算-通信重叠加速专家混合模型训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [MobiCom'24] [Asteroid：面向异构边缘设备协作DNN训练的资源高效混合流水线并行](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3636534.3649363)\n- [EuroSys'24] [DynaPipe：通过动态流水线优化多任务训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3627703.3629585)\n- [EuroSys'24] [ScheMoE：具有任务调度功能的可扩展专家混合分布式训练系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650083)\n- [EuroMLSys@EuroSys'24] [云GPU短缺下的ML训练：跨区域是解决方案吗？](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3642970.3655843)\n- [ASPLOS'24] [AdaPipe：通过自适应重计算与划分优化流水线并行](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651359)\n- [ASPLOS'24] [PrimePar：面向大型Transformer模型训练的高效时空张量划分](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651357)\n- [EuroSys'24] [Aceso：通过迭代缓解瓶颈实现高效并行DNN训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629554)\n- [NSDI'24] [MegaScale：将大规模语言模型训练扩展至超过1万台GPU](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fjiang-ziheng)\n- [NSDI'24] [DISTMM：加速多模态模型的分布式训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fhuang)\n- [NSDI'24] 利用嵌入调度加速神经推荐训练\n- [NSDI'24] 大规模弹性：管理Google的TPUv4机器学习超级计算机\n- [NSDI'24] QuickUpdate：面向大规模推荐模型的实时个性化系统\n- [NSDI'24] [将大规模语言模型训练扩展至超过1万台GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15627)\n- [TKDE'24] [通过平衡内存负载优化提升自动并行训练效果](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10449463)\n  - Galvatron（VLDB'23）的扩展版本\n  - arxiv版本（2023年）：[链接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02031)\n- [ICLR'24] [零气泡（几乎）流水线并行](https:\u002F\u002Fopenreview.net\u002Fforum?id=tuzTN0eIO5)\n- [ICLR'24] [CO2：实现完全通信-计算重叠的高效分布式训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZO5cn4IfaN)\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16265)\n- [AAMAS'24] [Holonic Learning：一种灵活的基于代理的分布式机器学习框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.10839)\n- [VLDB'24] [Saturn：面向多大型模型深度学习工作负载的优化数据系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.01226)\n- [HPCA'24] [Tessel：通过灵活的调度搜索提升大型DNN模型的分布式执行效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15269)\n- [NSDI'24] Parcae：在抢占式实例上进行主动、面向吞吐量优化的DNN训练\n- [EuroSys'24] [HAP：在异构GPU集群上进行SPMD DNN训练，并采用自动化程序合成](https:\u002F\u002Fi.cs.hku.hk\u002F~cwu\u002Fpapers\u002Fswzhang-eurosys24.pdf)\n\n- [arxiv'23] [vTrain：用于评估经济高效且计算最优的大规模语言模型训练的仿真框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.12391)\n- [arxiv'23] [ASPEN：使用单个GPU进行大规模语言模型的高吞吐量LoRA微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02515)\n- [arxiv'23] [FlexModel：面向分布式大语言模型可解释性的框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03140)\n- [arxiv'23] [Holmes：面向异构网卡环境下的跨集群分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.03549)\n- [arxiv'23] [RTP：通过内存去重重新思考张量并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.01635)\n- [arxiv'23] [FP8-LM：FP8大语言模型的训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18313)\n- [arxiv'23] [Redco：一种轻量级工具，可在任何GPU\u002FTPU上自动化LLM的分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.16355)\n- [arxiv'23] [DeepSpeed Ulysses：用于支持极端长序列Transformer模型训练的系统优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.14509)\n- [arxiv'23] [分布式数据并行PyTorch实现的分布式Shampoo优化器，用于大规模神经网络训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06497)\n- [arxiv'23] 
[FLM-101B：一个开源LLM及其如何以10万美元预算进行训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.03852.pdf)\n- [arxiv'23] [UniAP：通过混合整数二次规划统一层间与层内自动并行化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.16375)\n- [arxiv'23] 使用大型语言模型对并行程序建模\n- [arxiv'23] [Proteus：模拟分布式DNN训练的性能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02267)\n- [arxiv'23] 用于高效基础模型训练的带重叠通信的自动张量模型并行\n- [arxiv'23] 用于深度学习训练的解耦模型调度\n- [arxiv'23] RAF：面向深度学习模型训练的整体编译\n- [arxiv'23] Ada-Grouper：通过针对微批次的适应性分组调度加速抢占式网络中的流水线并行\n- [arxiv'23] 压缩激活值是否有助于模型并行训练？\n- [arxiv'23] Colossal-Auto：面向大规模模型的并行化与激活检查点的统一自动化\n- [arxiv'23] 将视觉Transformer扩展至220亿参数\n- [arxiv'23] 使用Rhino自动并行化大型模型：生产级AI平台上的系统化方法\n- [arxiv'23] TAP：通过张量自动并行化加速大规模DNN训练\n- [arxiv'23] [SuperScaler：通过统一抽象支持灵活的DNN并行化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.08984)\n- [arxiv'23] ATP：面向基础模型的适应性张量并行\n\n- [ICPP'23] Mercury：面向大型深度学习模型的快速且最优的设备放置\n- [IPDPS'23] [MPipeMoE：具有适应性流水线并行的预训练模型内存高效MoE](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [CLUSTER'23] Prophet：面向大规模MoE模型并行训练的细粒度负载均衡\n- [NeurIPS'23] [ASPEN：打破算子障碍，实现深度神经网络的高效并行化](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002Fd899a31938c7838965b589d9b14a5ca6-Abstract-Conference.html)\n- [NeurIPS'23] [DeepPCR：神经网络中顺序操作的并行化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.16318)\n- [DAC'23] MixPipe：用于训练大规模模型的高效双向流水线并行\n- [SC'23] [Hanayo：利用波浪式流水线并行提升大型模型训练效率](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2308.15762.pdf)\n- [SOSP'23] PIT：通过置换不变变换优化动态稀疏深度学习模型\n- [SOSP'23] [Oobleck：使用流水线模板实现大型模型的弹性分布式训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613152)\n- [TPDS'23] [Fold3D：重新思考并并行化大型DNN模型训练中的计算与通信任务](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10050126)\n- [MICRO'23] [Grape：面向GPU上的动态深度神经网络的实用高效图执行](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3613424.3614248)\n- [HPCA'23] [Phloem：通过细粒度流水线并行自动加速不规则应用](https:\u002F\u002Fpeople.csail.mit.edu\u002Fqmn\u002Fpapers\u002Fnguyen_phloem_hpca_2023_preprint.pdf)\n- [ACL'23] [序列并行：从系统视角看长序列训练](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.134\u002F)\n- [CCGrid'23] 一种深度学习流水线并行优化方法\n- [OSDI'23] MGG：在多GPU平台上通过细粒度的核内通信-计算流水线加速图神经网络\n- [ATC'23] Lina：加速分布式MoE训练与推理\n- [ATC'23] SmartMoE：通过结合离线与在线并行化高效训练稀疏激活模型\n- [ATC'23] MSRL：基于数据流片段的分布式强化学习\n- [综述 :mag:] [TPDS'23] 大规模深度学习训练的自动并行化综述\n- [ICML'23] [SWARM并行：大型模型的训练竟可如此高效地减少通信量](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.11913)\n- [ICML'23] BPipe：用于训练大型语言模型的内存平衡型流水线并行\n- [ICS'23] 一种混合张量-专家-数据并行方法，用于优化混合专家训练\n- [NSDI'23] TopoOpt：为分布式训练作业协同优化网络拓扑与并行化策略\n- [NSDI'23] Bamboo：使抢占式实例更具弹性，从而以低成本训练大型DNN\n- [NSDI'23] [ARK：面向分布式深度学习的GPU驱动代码执行](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fhwang)\n- [SIGMOD'23] FlexMoE：通过动态设备放置扩展大规模稀疏预训练模型的训练规模\n- [MLSys'23] 关于优化模型并行通信的讨论\n- [MLSys'23] [MegaBlocks：利用混合专家实现高效的稀疏训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F5a54f79333768effe7e8927bcccffe40-Abstract-mlsys2023.html)\n- [MLSys'23] [Tutel：规模化下的自适应混合专家](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F9412531719be7ccf755c4ff98d0969dc-Abstract-mlsys2023.html)\n- [TPDS'23] Merak：一个高效的分布式DNN训练框架，为巨型基础模型提供自动3D并行\n- [PPoPP'23] 弹性平均用于高效的流水线DNN训练\n- [PPoPP'23] 在光互连系统中为分布式DNN训练实现高效的All-Reduce\n- [VLDB'23] MiCS：在公有云上以近线性速度扩展巨型模型的训练规模\n- [VLDB'23] Galvatron：利用自动并行化在多GPU上高效训练Transformer\n- [ASPLOS'23] Mobius：在通用GPU服务器上微调大规模模型\n- [ASPLOS'23] Optimus-CC：通过3D并行感知通信压缩实现高效的大型NLP模型训练\n\n- [arxiv'22] Colossal-AI：面向大规模并行训练的统一深度学习系统\n- [arxiv'22] 
使用DeepSpeed和Megatron训练Megatron-Turing NLG 530B，一个大规模生成式语言模型\n- [ICPP'22] [Tesseract：高效并行化张量并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.14500)\n- [MLSys'22] [在分层系统上为深度学习合成最优并行化布局与规约策略](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html)\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10548)\n- [NeurIPS'22] 利用有保证的激活量化在低速网络上微调语言模型\n- [SoCC'22] 使用SPMD并行化加速大规模分布式神经网络训练\n- [MLSys'22] Pathways：面向ML的异步分布式数据流\n- [MLSys'22] [SRIFTY：云端快速且经济高效的分布式神经网络训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F0cafb7890f6a7d4de65507d5bb7e0187-Abstract.html)\n- [MLSys'22] [通过突发并行训练实现高效的强缩放](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Fb99e69074b2fa1d8c8fe0d5b60e19397-Abstract.html)\n- [EuroSys'22] Varuna：可扩展、低成本的大规模深度学习模型训练\n- [ATC'22] Whale：在异构GPU上高效训练巨型模型\n- [NeurIPS'22] [AMP：自动发现考虑异构性的模型并行策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07297)\n- [PPoPP'22] [FasterMoE：建模与优化大规模动态预训练模型的训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503221.3508418)\n- [ICML'22] [DeepSpeed-MoE：推进专家混合模型的推理与训练，以支持下一代AI规模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05596)\n- [ICML'22] [GLaM：利用专家混合模型高效扩展语言模型](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdu22c\u002Fdu22c.pdf)\n- [HPDC'22] Hare：在异构GPU上挖掘分布式机器学习的任务间与任务内并行性\n- [OSDI'22] Alpa：自动化分布式深度学习中的算子间与算子内并行\n- [NSDI'22] 加速跨深度学习框架的数据并行训练中的集体通信\n\n- [arxiv'21] Amazon SageMaker模型并行：一种通用且灵活的大模型训练框架\n- [arxiv'21] GSPMD：面向ML计算图的通用且可扩展并行化方法\n- [JMLR'21] [Switch Transformers：通过简单高效的稀疏性扩展至万亿参数模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.03961)\n- [TPDS'21] TensorOpt：探索自动并行化在分布式DNN训练中的权衡\n- [ATC'21] 在通用硬件上使用自动流水线模型并行微调巨型神经网络\n- [SIGMOD'21] 基于部分规约的异构感知分布式机器学习训练 [也见[2.10](#210-communication-optimization)]\n- [MLSys'21] PipeMare：异步流水线并行DNN训练\n- [ICLR'21] GShard：利用条件计算与自动分片扩展巨型模型\n- [NeurIPS'21] Piper：用于DNN并行化的多维规划器\n- [ICML'21] 内存高效的流水线并行DNN训练\n- [ICML'21] TeraPipe：用于训练大规模语言模型的令牌级流水线并行\n- [ICML'21] PipeTransformer：用于大规模模型分布式训练的自动化弹性流水线\n- [SC'21] Chimera：利用双向流水线高效训练大规模神经网络\n- [SC'21] 使用Megatron-LM（`PTD-P`或`Megatron-LM v2`）在GPU集群上高效训练大规模语言模型\n- [FAST'21] Behemoth：面向超大规模DNN的闪存中心型训练加速器\n- [PPoPP'21] DAPPLE：一种用于训练大型模型的流水线式数据并行方法\n- [VLDB'21] 数据系统上的分布式深度学习：方法比较分析\n\n- [HPCA'20] AccPar：面向异构深度学习加速器的张量划分\n- [NeurIPS'20] DNN图中算子设备放置的有效算法\n- [arxiv'20] Megatron-LM：利用模型并行训练数十亿参数的语言模型\n- [KDD'20教程] DeepSpeed：系统优化使超过1000亿参数的深度学习模型得以训练\n- [VLDB'20] PyTorch Distributed：加速数据并行训练的经验\n- [OSDI'20] 一种用于加速异构GPU\u002FCPU集群中分布式DNN训练的统一架构（`BytePS`）\n- [SOSP'19] PipeDream：DNN训练的广义流水线并行\n- [NeurIPS'20] [语言模型是少样本学习者](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002F1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html?utm_medium=email&utm_source=transaction) [**来自OpenAI**]\n  - [arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.14165)\n- [arxiv'20] [神经语言模型的扩展规律](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.08361) [**来自OpenAI**]\n\n- [HPCA'19] [HyPar：迈向深度学习加速器阵列的混合并行化](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02067)\n- [IEEE MICRO'19] [优化深度学习训练中的多GPU并行化策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.13257)\n- [MLSys'19] [超越数据与模型并行化的深度神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.05358)（`FlexFlow`）\n- [MLSys'19] TicTac：通过通信调度加速分布式深度学习\n- [EuroSys'19] Parallax：面向稀疏性的深度神经网络数据并行训练\n- [EuroSys'19] 利用自动数据流图分区支持超大规模模型（`Tofu`）\n- [SOSP'19] 一种用于加速分布式DNN训练的通用通信调度器\n- [NeurIPS'19] Mesh-TensorFlow：面向超级计算机的深度学习\n- [NeurIPS'19] 
GPipe：利用流水线并行高效训练巨型神经网络\n- [ICML'18] 探索卷积神经网络并行化的隐藏维度\n\n- [综述 :mag:] [IJCAI'22] 大型神经网络高效训练综述\n- [综述 :mag:] [ACM CSUR'19] 解密并行与分布式深度学习\n- [综述 :mag:] [ACM CSUR'19] 分布式基础设施上的可扩展深度学习：挑战、技术和工具\n\n\n\n### 自动机器学习\n- [OSDI'23] Hydro：数据中心中的基于代理的超参数调优服务\n- [NSDI'23] ModelKeeper：通过自动化训练预热加速DNN训练\n- [OSDI'20] Retiarii：一个深度学习探索性训练框架\n\n### GNN 训练系统\n> 有关 GNN 系统论文的完整列表，请参阅 [https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems](https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems)。\n\n- [PPoPP'26] [TAC：基于缓存的系统，用于加速多 GPU 平台上的百亿规模 GNN 训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786460)\n- [PPoPP'26] [ElasGNN：面向分布式 GNN 训练的弹性训练框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786440)\n- [SC'25] [Plexus：利用三维并行全图 GNN 训练驯服百亿边图](https:\u002F\u002Fpssg.cs.umd.edu\u002Fassets\u002Fpapers\u002F2025-11-plexus-sc.pdf)\n- [SIGMOD'25] [NeutronHeter：针对异构集群优化分布式图神经网络训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3749175)\n- [ICDE'25] [CaliEX：一种基于磁盘的大规模 GNN 训练系统，融合了缓存与执行的设计](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficde\u002F2025\u002F360300c908\u002F26FZBj8WvyU)\n- [arxiv'25] [Plexus：利用三维并行 GNN 训练驯服百亿边图](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04083)\n- [HPCA'25] [Mithril：面向深度 GNN 训练的可扩展系统](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fhpca\u002F2025\u002F064700b052\u002F25Ko4zIl7So)\n- [arxiv'25] [Armada：大规模图神经网络的内存高效分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17846)\n- [VLDB'25] [NeutronTP：具有张量并行性的负载均衡分布式全图 GNN 训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20379)\n- [arxiv'24] [FastGL：一种 GPU 高效框架，用于加速大规模基于采样的 GNN 训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14939)\n- [ICPP'24] [GNNDrive：降低基于磁盘的 GNN 训练中的内存竞争与 I\u002FO 拥堵](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673063)\n- [VLDB'24] [NeutronStream：面向图流的滑动窗口动态 GNN 训练框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.02473)\n- [arxiv'23] [ReFresh：通过利用稳定的历史嵌入来减少图神经网络训练中的内存访问](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.07482)\n- [arxiv'23] [Helios：在 TB 级图上实现内存内性能的高效外存 GNN 训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.00837)\n- [arxiv'23] [GNNPipe：利用流水线式模型并行加速分布式全图 GNN 训练](https:\u002F\u002Fbrowse.arxiv.org\u002Fpdf\u002F2308.10087.pdf)\n- [MLSys'23] [分布式全图 GNN 训练中的自适应消息量化与并行化](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F0ea77501c3f6bcba97e082d03a40646d-Abstract-mlsys2023.html)\n- [SIGMOD'23] DUCATI：一种双缓存训练系统，适用于使用 GPU 的巨型图上的图神经网络\n- [OSDI'23] [MGG：在多 GPU 平台上通过细粒度的核内通信—计算流水线加速图神经网络](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi23-wang-yuke.pdf)\n- [EuroSys'23] [MariusGNN：资源高效的图神经网络外存训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.02365)\n- [KDD'22] [面向十亿规模异构图的图神经网络分布式混合 CPU 和 GPU 训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3534678.3539177)\n- [VLDB'22] [TGL：一个用于十亿规模图上时序 GNN 训练的通用框架](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol15\u002Fp1572-zhou.pdf)\n- [OSDI'21] [P3：大规模分布式深度图学习](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi21\u002Fpresentation\u002Fgandhi)\n\n## 推理系统\n- [MLSys'26] [满足 SLO 要求，大幅缩短时间：使用 OptiKIT 自动优化企业级 LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20408)\n- [arxiv'26] [Laser：解锁层级调度，实现高效的多 SLO LLM 服务](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786413)\n- [arxiv'26] [推测解码：性能还是幻觉？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11580)\n- [arxiv'26] 
[计划、验证与填充：扩散语言模型的结构化并行解码方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12247)\n- [arxiv'26] [PLA-Serve：一种预填充长度感知的 LLM 服务系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11589)\n- [PPoPP'26] [加速 GPU 上的稀疏 Transformer 推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06095)\n- [VLDB'26] [ORBITFLOW：具有细粒度 KV 缓存重构功能的 SLO 友好长上下文 LLM 服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10729)\n- [IEEE Computer'26] [大型语言模型推理硬件面临的挑战与研究方向](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05047)\n- [arxiv'26] [AIConfigurator：面向多框架 LLM 服务的闪电般快速配置优化工具](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06288)\n- [arxiv'26] [FlashInfer-Bench：构建 AI 驱动的 LLM 系统良性循环](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.00227)\n- [NSDI'26] [FlexLLM：在保证 SLO 的前提下，实现 LLM 推理与微调的标记级协同服务](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Foliaro)\n- [NSDI'26] [FastServe：面向大型语言模型推理的迭代级抢占式调度](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fwu-bingyang)\n- [NSDI'26] [HydraServe：最大限度地减少公有云中无服务器 LLM 服务的冷启动延迟](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Flou)\n- [FPGA'26] [CXL-SpecKV：用于数据中心 LLM 服务的分离式 FPGA 推测 KV 缓存](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11920)\n- [ASPLOS'26] [XY-Serve：面向动态 LLM 工作负载的端到端多功能生产级服务](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762228)\n- [AAAI'26] [Lethe：面向推理密集型 LLM 服务的层和时间自适应 KV 缓存修剪](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06029)\n- [EuroSys'26] [FlexPipe：通过在碎片化的无服务器集群中进行飞行中的流水线重构，灵活调整动态 LLM 服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.11938)\n- [EuroSys'26] [KunServe：以参数为中心的内存管理，用于高效处理 LLM 服务中的内存过载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18169)\n- [EuroSys'26] [TokenFlow：通过抢占式调度，在请求突发情况下实现响应迅速的 LLM 文本流媒体服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02758)\n\n- [SoCC'25] Multiplexed Heterogeneous LLM Serving via Stage-Aligned Parallelism\n- [arxiv'25] [TraCT: Disaggregated LLM Serving with CXL Shared Memory KV Cache at Rack-Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18194)\n- [arxiv'25] [L4: Low-Latency and Load-Balanced LLM Serving via Length-Aware Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19179)\n- [arxiv'25] [Efficient Multi-Adapter LLM Serving via Cross-Model KV-Cache Reuse with Activated LoRA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.17910)\n- [arxiv'25] [EVICPRESS: Joint KV-Cache Compression and Eviction for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14946)\n- [arxiv'25] [Staggered Batch Scheduling: Co-optimizing Time-to-First-Token and Throughput for High-Efficiency LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16134)\n- [arxiv'25] [MultiPath Transfer Engine: Breaking GPU and Host-Memory Bandwidth Bottlenecks in LLM Services](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.16056)\n- [arxiv'25] [PROSERVE: Unified Multi-Priority Request Scheduling for LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12928)\n- [arxiv'25] [xGR: Efficient Generative Recommendation Serving at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11529)\n- [arxiv'25] [ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13586)\n- [arxiv'25] [TokenScale: Timely and Accurate Autoscaling for Disaggregated LLM Serving with Token Velocity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03416)\n- [arxiv'25] [AugServe: Adaptive Request Scheduling for Augmented Large Language Model Inference 
Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04013)\n- [arxiv'25] [Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.01278)\n- [arxiv'25] [SIMPLE: Disaggregating Sampling from GPU Inference into a Decision Plane for Faster Distributed LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.00719)\n- [arxiv'25] [OmniInfer: System-Wide Acceleration Techniques for Optimizing LLM Serving Throughput and Latency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22481)\n- [arxiv'25] [OOCO: Latency-disaggregated Architecture for Online-Offline Co-locate LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21862)\n- [arxiv'25] [Serving Heterogeneous LoRA Adapters in Distributed LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22880)\n- [arxiv'25] [Harli: SLO-Aware Co-location of LLM Inference and PEFT-based Finetuning on Model-as-a-Service Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11729)\n- [arxiv'25] [CLO: Efficient LLM Inference System with CPU-Light KVCache Offloading via Algorithm-System Co-Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14510)\n- [arxiv'25] [FengHuang: Next-Generation Memory Orchestration for AI Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10753)\n- [arxiv'25] [FLUXSERVE: Serving a Variety of LLMs for Best-Effort Efficiency via Dynamic Temperature-Aware Multiplexing](https:\u002F\u002Fopenreview.net\u002Fpdf?id=I1CGMNNX5i)\n- [arxiv'25] [Synera: Synergistic LLM Serving across Device and Cloud at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07423)\n- [arxiv'25] [DuetServe: Harmonizing Prefill and Decode for LLM Serving via Adaptive GPU Multiplexing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.04791)\n- [Middleware'25] [Argus: Quality-Aware High-Throughput Text-to-Image Inference Serving System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06724)\n- [arxiv'25] [From Models to Operators: Rethinking Autoscaling Granularity for Large Generative Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02248)\n- [arxiv'25] [TapOut: A Bandit-Based Approach to Dynamic Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02017)\n- [NeurIPS'25] [SuffixDecoding: Extreme Speculative Decoding for Emerging AI Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04975)\n- [arxiv'25] [FREESH: Fair, Resource- and Energy-Efficient Scheduling for LLM Serving on Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00807)\n- [EMNLP'25] [Distributed LLM Serving on Consumer-Grade GPUs by Reconciling Computation and Communication](https:\u002F\u002Faclanthology.org\u002F2025.findings-emnlp.957.pdf)\n- [arxiv'25] [Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18245)\n- [MICRO'25] [MX+: Pushing the Limits of Microscaling Formats for Efficient Large Language Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756118)\n- [MICRO'25] [Kelle: Co-design KV Caching and eDRAM for Efficient LLM Serving in Edge Computing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3725843.3756071)\n- [arxiv'25] [SPAD: Specialized Prefill and Decode Hardware for Disaggregated LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08544)\n- [arxiv'25] [From Tokens to Layers: Redefining Stall-Free Scheduling for LLM Serving with Layered 
Prefill](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08055)\n- [CLUSTER'25] [Scalable and Fast Inference Serving via Hybrid Communication Scheduling on Heterogeneous Networks](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186468\u002F2aCq2GqaO6Q)\n- [CLUSTER'25] [Rock: Serving Multimodal Models in Cloud with Heterogeneous-Aware Resource Orchestration for Thousands of LoRA Adapters](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186463\u002F2aCq3B9XD6o)\n- [arxiv'25] [TridentServe: A Stage-level Serving System for Diffusion Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02838)\n- [arxiv'25] [MACE: A Hybrid LLM Serving System with Colocated SLO-aware Continuous Retraining Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03283)\n- [Survey :mag:] [ACM CSUR'25] [Towards Efficient Generative Large Language Model Serving: A Survey from Algorithms to Systems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3754448)\n- [SOSP'25] [Aegaeon: Effective GPU Pooling for Concurrent LLM Serving on the Market](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764815)\n- [SOSP'25] [IC-Cache: Efficient Large Language Model Serving via In-context Caching](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764829)\n- [SOSP'25] [DiffKV: Differentiated Memory Management for Large Language Models with Parallel KV Compaction](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764810)\n- [arxiv'25] [TetriServe: Efficient DiT Serving for Heterogeneous Image Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01565)\n- [arxiv'25] [Parallax: Efficient LLM Inference Service over Decentralized Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26182)\n- [arxiv'25] [RServe: Overlapping Encoding and Prefill for Efficient LMM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24381)\n- [arxiv'25] [Cronus: Efficient LLM inference on Heterogeneous GPU Clusters via Partially Disaggregated Prefill](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17357)\n- [arxiv'25] [Shift Parallelism: Low-Latency, High-Throughput LLM Inference for Dynamic Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16495)\n- [COLM'25] [OverFill: Two-Stage Models for Efficient Language Model Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08446)\n- [ACM MM'25] [TinyServe: Query-Aware Cache Selection for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12211)\n- [arxiv'25] [Scaling Up Throughput-oriented LLM Inference Applications on Heterogeneous Opportunistic GPU Clusters with Pervasive Context Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13201)\n- [SC'25] [Hetis: Serving LLMs in Heterogeneous GPU Clusters with Fine-grained and Dynamic Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08309)\n- [arxiv'25] [FineServe: Precision-Aware KV Slab and Two-Level Scheduling for Heterogeneous Precision LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06261)\n- [arxiv'25] [AdaptCache: KV Cache Native Storage Hierarchy for Low-Delay and High-Quality Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00105)\n- [arxiv'25] [Predictable LLM Serving on GPU Clusters](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20274)\n- [SIGCOMM'25] [SCX: Stateless KV-Cache Encoding for Cloud-Scale Confidential Transformer 
Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750509)\n- [arxiv'25] [Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19559)\n- [arxiv'25] [Rethinking Caching for LLM Serving Systems: Beyond Traditional Heuristics](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18736)\n- [OSDI'25] [BlitzScale: Fast and Live Large Model Autoscaling with O(1) Host Caching](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhang-dingyan)\n- [OSDI'25] [WaferLLM: Large Language Model Inference at Wafer Scale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fhe)\n- [OSDI'25] [NanoFlow: Towards Optimal Large Language Model Serving Throughput](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhu-kan)\n- [arxiv'25] [TokenLake: A Unified Segment-level Prefix Cache Pool for Fine-grained Elastic Long-Context LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17219)\n- [arxiv'25] [HyperFlexis: Joint Design of Algorithms and Systems for Multi-SLO Serving and Fast Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15919)\n- [arxiv'25] [Equinox: Holistic Fair Scheduling in Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.16646)\n- [arxiv'25] [Efficient Mixed-Precision Large Language Model Inference with TurboMind](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15601)\n- [ICML'25] [Packrat: Automatic Reconfiguration for Latency Minimization in CPU-based DNN Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18174)\n- [arxiv'25] [Kairos: Low-latency Multi-Agent Serving with Shared LLMs and Excessive Loads in the Public Cloud](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06948)\n- [arxiv'25] [Block: Balancing Load in LLM Serving with Context, Knowledge and Predictive Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03611)\n- [arxiv'25] [Prefill-Decode Aggregation or Disaggregation? 
Unifying Both for Goodput-Optimized LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.01989)\n- [arxiv'25] [Unlock the Potential of Fine-grained LLM Serving via Dynamic Module Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.18006)\n- [ACL'25] [MiniKV: Pushing the Limits of 2-Bit KV Cache via Compression and System Co-Design for Efficient Long Context Inference](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.952.pdf)\n- [ACL'25] [StitchLLM: Serving LLMs, One Block at a Time](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.1305.pdf)\n- [ACL'25] [SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.685.pdf)\n- [arxiv'25] [Helix Parallelism: Rethinking Sharding Strategies for Interactive Multi-Million-Token LLM Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07120)\n- [arxiv'25] [Proactive Intra-GPU Disaggregation of Prefill and Decode in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.06608)\n- [arxiv'25] [MIRAGE: KV Cache Optimization through Parameter Remapping for Multi-tenant LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.11507)\n- [CODEML @ ICML'25] [TorchAO: PyTorch-Native Training-to-Serving Model Optimization](https:\u002F\u002Fopenreview.net\u002Fpdf?id=HpqH0JakHf)\n- [arxiv'25] [On Evaluating Performance of LLM Inference Serving Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.09019)\n- [arxiv'25] [PrefillOnly: An Inference Engine for Prefill-only Workloads in Large Language Model Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07203)\n- [ICML'25] [EPIC: Efficient Position-Independent Caching for Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15332)\n- [arxiv'25] [SiPipe: Bridging the CPU-GPU Utilization Gap for Efficient Pipeline-Parallel LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22033)\n- [arxiv'25] [Utility-Driven Speculative Decoding for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.20675)\n- [ATC'25] [DEEPSERVE: Serverless Large Language Model Serving at Scale](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fhu-junhao)\n- [ISCA'25] [WindServe: Efficient Phase-Disaggregated LLM Serving with Stream-based Dynamic Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3730999)\n- [ISCA'25] [Hybe: GPU-NPU Hybrid System for Efficient LLM Inference with Million-Token Context Window](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731051)\n- [ICLR'25] [TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05076)\n- [arxiv'25] [Cascadia: A Cascade Serving System for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04203)\n- [arxiv'25] [Efficient and Workload-Aware LLM Serving via Runtime Layer Swapping and KV Cache Resizing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.02006)\n- [arxiv'25] [SkyLB: A Locality-Aware Cross-Region Load Balancer for LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24095)\n- [arxiv'25] [EmbAdvisor: Adaptive Cache Management for Sustainable LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23970)\n- [arxiv'25] [SCORPIO: Serving the Right Requests at the Right Time for Heterogeneous SLOs in LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23022)\n- [arxiv'25] [HybridServe: 
Efficient Serving of Large AI Models with Confidence-Based Cascade Routing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.12566)\n- [arxiv'25] [ServerlessLoRA: Minimizing Latency and Cost in Serverless Inference for LoRA-Based LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14468)\n- [arxiv'25] [TokenWeave: Efficient Compute-Communication Overlap for Distributed LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11329)\n- [arxiv'25] [Tilus: A Virtual Machine for Arbitrary Low-Precision GPGPU Computation in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12984)\n- [OSDI'25] [Clover: Exploiting Intra-device Parallelism for High Throughput Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fzhu-kan)\n- [arxiv'25] [ServeGen: Workload Characterization and Generation of Large Language Model Serving in Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09999)\n- [arxiv'25] [ELIS: Efficient LLM Iterative Scheduling System with Response Length Predictor](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.09142)\n- [arxiv'25] [Improving the Serving Performance of Multi-LoRA Large Language Models via Efficient LoRA and KV Cache Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03756)\n- [arxiv'25] [Prism: Unleashing GPU Sharing for Cost-Efficient Multi-LLM Serving](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04021)\n- [arxiv'25] [Tempo: Application-aware LLM Serving with Mixed SLO Requirements](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20068)\n- [arxiv'25] [Ascendra: Dynamic Request Prioritization for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20828)\n- [arxiv'25] [Medha: Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'25] [Streaming, Fast and Slow: Cognitive Load-Aware Streaming for Efficient LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17999)\n- [arxiv'25] [Bullet: Boosting GPU Utilization for LLM Serving via Dynamic Spatial-Temporal Orchestration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19516)\n- [Survey :mag:] [arxiv'25] [Taming the Titans: A Survey of Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19720)\n- [MLSys'25] [SOLA: Optimizing SLO Attainment for Large Language Model Serving with State-Aware Scheduling](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3231)\n- [MLSys'25] [Marconi: Prefix Caching for the Era of Hybrid LLMs](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3260)\n- [arxiv'25] [PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18583)\n- [arxiv'25] [Circinus: Efficient Query Planner for Compound ML Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16397)\n- [arxiv'25] [HPU: High-Bandwidth Processing Unit for Scalable, Cost-effective LLM Inference via GPU Co-processing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16112)\n- [Mobicom'25] [D2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15299)\n- [arxiv'25] [SeaLLM: Service-Aware and Latency-Optimized Resource Sharing for Large Language Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15720)\n- [arxiv'25] [gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token 
Throttling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14775)\n- [arxiv'25] [Optimizing SLO-oriented LLM Serving with PD-Multiplexing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14489)\n- [arxiv'25] [SLO-Aware Scheduling for Large Language Model Inferences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14966)\n- [arxiv'25] [Cost-Efficient LLM Serving in the Cloud: VM Selection with KV Cache Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11816)\n- [ISPASS'25] [Characterizing and Optimizing LLM Inference Workloads on CPU-GPU Coupled Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11750)\n- [arxiv'25] [HELIOS: Adaptive Model And Early-Exit Selection for Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.10724)\n- [arxiv'25] [DynaServe: Unified and Elastic Tandem-Style Execution for Dynamic Disaggregated LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09285)\n- [arxiv'25] [Efficient LLM Serving on Hybrid Real-time and Best-effort Requests](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09590)\n- [arxiv'25] [SLOs-Serve: Optimized Serving of Multi-SLO LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.08784)\n- [arxiv'25] [Understanding and Optimizing Multi-Stage AI Inference Pipelines](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09775)\n- [arxiv'24] [Fast and Live Model Auto Scaling with O(1) Host Caching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.17246)\n- [SIGMOD'25] [Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07494)\n- [EuroMLSys'25] [Performance Aware LLM Load Balancer for Mixed Workloads](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3721146.3721947)\n- [MLSys'25] [Rethinking Key-Value Cache Compression Techniques for Large Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.24000)\n- [arxiv'25] [WaferLLM: A Wafer-Scale LLM Inference System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.04563)\n- [HPCA'25] [PAISE: PIM-Accelerated Inference Scheduling Engine for Transformer-based LLM](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10946299)\n- [HPCA'25] throttLL'eM: Predictive GPU Throttling for Energy Efficient LLM Inference Serving\n- [arxiv'25] [Niyama : Breaking the Silos of LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.22562)\n- [ASPLOS'25] [Aqua: Network-Accelerated Memory Offloading for LLMs in Scale-Up GPU Domains](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715983)\n- [ASPLOS'25] [Past-Future Scheduler for LLM Serving under SLA Guarantees](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716011)\n- [ASPLOS'25] [Accelerating LLM Serving for Multi-turn Dialogues with Efficient Resource Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716245)\n- [EuroSys'25] [SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717481)\n- [EuroSys'25] [Multiplexing Dynamic Deep Learning Workloads with SLO-awareness in GPU Clusters](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3696074)\n- [arxiv'25] [Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20552)\n- [EuroSys'25] [NeuStream: Bridging Deep Learning 
Serving and Stream Processing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717489)\n- [SoCC'25] [ModServe: Scalable and Resource-Efficient Large Multimodal Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3772052.3772254)\n- [arxiv'25] [PipeBoost: Resilient Pipelined Architecture for Fast Serverless LLM Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.17707)\n- [ISCA'25] [Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18599)\n- [arxiv'25] [Jenga: Effective Memory Management for Serving LLM with Heterogeneity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.18292)\n- [arxiv'25] [AccelGen: Heterogeneous SLO-Guaranteed High-Throughput LLM Inference Serving for Diverse Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.13737)\n- [FAST'25] [Mooncake: Trading More Storage for Less Computation — A KVCache-centric Architecture for Serving LLM Chatbot](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Ffast25\u002Fpresentation\u002Fqin)\n- [arxiv'25] [Collaborative Speculative Inference for Efficient LLM Inference Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10325)\n- [NSDI'25] [SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fkhare)\n- [arxiv'25] [Seesaw: High-throughput LLM Inference via Model Re-sharding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06433)\n- [arxiv'25] [SpecServe: Efficient and SLO-Aware Large Language Model Serving with Adaptive Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05096)\n- [arxiv'25] [ADOR: A Design Exploration Framework for LLM Serving with Enhanced Latency and Throughput](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.04253)\n- [arxiv'25] [Long-Context Inference with Retrieval-Augmented Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [WWW'25] [External Large Foundation Model: How to Efficiently Serve Trillions of Parameters for Online Ads Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.17494)\n- [arxiv'25] [Make LLM Inference Affordable to Everyone: Augmenting GPU Memory with NDP-DIMM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16963)\n- [arxiv'25] [KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.16002)\n- [arxiv'25] [Serving Models, Fast and Slow:Optimizing Heterogeneous LLM Inferencing Workloads at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14617)\n- [arxiv'25] [LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14866)\n- [arxiv'25] [HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12574)\n- [arxiv'25] [Autellix: An Efficient Serving Engine for LLM Agents as General Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13965)\n- [MLSys'25] [ThunderServe: High-performance and Cost-efficient LLM Serving in Cloud Environments](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09334)\n- [ICLR'25] [HexGen-2: Disaggregated Generative Inference of LLMs in Heterogeneous Environment](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.07903)\n- [arxiv'25] [Memory Offloading for Large Language Model Inference with Latency SLO Guarantees](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.08182)\n- 
[EuroSys'25] [SkyServe: Serving AI Models across Regions and Clouds with Spot Instances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01438)\n- [ASPLOS'25] [Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707215)\n- [ASPLOS'25] [Dilu: Enabling GPU Resourcing-on-Demand for Serverless DL Serving via Introspective Elasticity](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707251)\n- [arxiv'25] [MPIC: Position-Independent Multimodal Context Caching System for Efficient MLLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01960)\n- [arxiv'25] [Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00722)\n- [arxiv'25] [Towards Efficient Large Multimodal Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00937)\n- [arxiv'25] [HeteroLLM: Accelerating Large Language Model Inference on Mobile SoCs platform with Heterogeneous AI Accelerators](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14794)\n- [arxiv'25] [HyGen: Efficient LLM Serving via Elastic Online-Offline Request Co-location](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14808)\n- [arxiv'25] [Locality-aware Fair Scheduling in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.14312)\n- [arxiv'25] [DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05215)\n- [arxiv'25] [DeepFlow: Serverless Large Language Model Serving at Scale](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2501.14417)\n- [arxiv'25] [iServe: An Intent-based Serving System for LLMs](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2501.13111)\n- [arxiv'25] [AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12162v1)\n- [arxiv'25] [EchoLM: Accelerating LLM Serving with Real-time Knowledge Distillation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12689)\n- [arxiv'25] [OMEGA: A Low-Latency GNN Serving System for Large Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08547)\n- [arxiv'25] [PRESERVE: Prefetching Model Weights and KV-Cache in Distributed LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08192)\n- [arxiv'25] [Hierarchical Autoscaling for Large Language Model Serving with Chiron](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08090)\n- [arxiv'25] [Mell: Memory-Efficient Large Language Model Serving via Multi-GPU KV Cache Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.06709)\n- [arxiv'25] [Accelerated Diffusion Models via Speculative Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05370)\n- [MLSys'25] [FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [EuroSys'25] [A House United Within Itself: SLO-Awareness for On-Premises Containerized ML Inference Clusters via Faro](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.19488)\n- [arxiv'24] [LLM Inference Unveiled: Survey and Roofline Model Insights](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.16363)\n- [arxiv'24] [Efficiently Serving LLM Reasoning Programs with Certaindex](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20993)\n- [arxiv'24] [LoL-PIM: Long-Context LLM Decoding with Scalable DRAM-PIM System](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20166)\n- [arxiv'24] [TimelyLLM: Segmented LLM Serving System for Time-sensitive 
Robotic Applications](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18695)\n- [arxiv'24] [Dovetail: A CPU\u002FGPU Heterogeneous Speculative Decoding for LLM inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18934)\n- [arxiv'24] [KunServe: Elastic and Efficient Large Language Model Serving with Parameter-centric Memory Management](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18169)\n- [arxiv'24] [Tackling the Dynamicity in a Production LLM Serving System with SOTA Optimizations via Hybrid Prefill\u002FDecode\u002FVerify Scheduling on Efficient Meta-kernels](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.18106)\n- [arxiv'24] [SYMPHONY: Improving Memory Management for LLM Inference Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.16434)\n- [arxiv'24] [A System for Microserving of LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.12488)\n- [arxiv'24] [HashAttention: Semantic Sparsity for Faster Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14468)\n- [arxiv'24] [SpecExec: Massively Parallel Speculative Decoding for Interactive LLM Inference on Consumer Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02532)\n- [arxiv'24] [Unifying KV Cache Compression for Large Language Models with LeanKV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.03131)\n- [arxiv'24] [PREBA: A Hardware\u002FSoftware Co-Design for Multi-Instance GPU based AI Inference Servers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [Survey :mag:] [ACM CSUR'24] [Resource-efficient Algorithms and Systems of Foundation Models: A Survey](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3706418)\n- [arxiv'24] [BlendServe: Optimizing Offline Inference for Auto-regressive Large Models with Resource-aware Batching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.16102)\n- [ICML'25] [SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10958) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [ICLR'25] [SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.02367) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [ICML'25] [SpargeAttention: Accurate and Training-free Sparse Attention Accelerating Any Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.18137) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSpargeAttn)]\n- [arxiv'24] [Optimizing Speculative Decoding for Serving Large Language Models Using Goodput](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.14066)\n- [ACL'24] [LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16710)\n- [ACL'24] [SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15030)\n- [arxiv'24] [EcoServe: Maximizing Multi-Resource Utilization with SLO Guarantees in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06364)\n- [IPDPS'24] [Exploiting Inter-Layer Expert Affinity for Accelerating Mixture-of-Experts Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08383)\n- [arxiv'24] [EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12247)\n- [NeurIPS'24] [Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early 
Exiting](https:\u002F\u002Fopenreview.net\u002Fforum?id=lT3oc04mDp)\n- [NeurIPS'24] [Toward Efficient Inference for Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=stXtBqyTWX)\n- [NeurIPS'24] [Sequoia: Scalable and Robust Speculative Decoding](https:\u002F\u002Fopenreview.net\u002Fforum?id=rk2L9YGDi2)\n- [arxiv'24] [Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08982)\n- [SC'24] [PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.11798)\n- [SC'24] [SMIless: Serving DAG-based Inference with Dynamic Invocations under Serverless Computing](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100a590\u002F21HUVxvcnoA)\n- [arxiv'24] [SuffixDecoding: A Model-Free Approach to Speeding Up Large Language Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.04975)\n- [arxiv'24] [V-LoRA: An Efficient and Flexible System Boosts Vision Applications with LoRA LMM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.00915)\n- [SenSys'24] [LiteMoE: Customizing On-device LLM Serving via Proxy Submodel Tuning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3666025.3699355)\n- [arxiv'24] [HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01433)\n- [arxiv'24] [NEO: Saving GPU Memory Crisis with CPU Offloading for Online LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01142)\n- [MICRO'24] [Pushing the Performance Envelope of DNN-based Recommendation Systems Inference on GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22249)\n- [arxiv'24] [VL-Cache: Sparsity and Modality-Aware KV Cache Compression for Vision-Language Model Inference Acceleration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.23317)\n- [arxiv'24] [ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21465)\n- [arxiv'24] [Is the GPU Half-Empty or Half-Full? 
Practical Scheduling Techniques for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17840)\n- [arxiv'24] [POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18038)\n- [PML4LRS @ ICLR2024] [Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07033)\n- [arxiv'24] [MagicPIG: LSH Sampling for Efficient LLM Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.16179)\n- [arxiv'24] [Revisiting SLO and Goodput Metrics in LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.14257)\n- [arxiv'24] [EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15332v1)\n- [arxiv'24] [ParallelSpec: Parallel Drafter for Efficient Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05589)\n- [EuroSys'25] [Fast State Restoration in LLM Serving with HCache](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05004)\n- [arxiv'24] [SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03960)\n- [arxiv'24] [vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04437)\n- [arxiv'24] [Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'24] [CSPS: A Communication-Efficient Sequence-Parallelism based Serving System for Transformer based Models with Long Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [arxiv'24] [DynamoLLM: Designing LLM Inference Clusters for Performance and Energy Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00741)\n- [HPCA'24] [KRISP: Enabling Kernel-wise RIght-sizing for Spatial Partitioned GPU Inference Servers](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10071121)\n- [arxiv'24] [Missile: Fine-Grained, Hardware-Level GPU Resource Isolation for Multi-Tenant DNN Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.13996)\n- [NeurIPS'24] [Efficient LLM Scheduling by Learning to Rank](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.15792)\n- [arxiv'24] [P\u002FD-Serve: Serving Disaggregated Large Language Model at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.08147)\n- [arxiv'24] [MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.11743)\n- [SOSP'24] [PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695964)\n- [SOSP'24] [LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695948)\n- [SOSP'24] [Improving DNN Inference Throughput Using Practical, Per-Input Compute Adaptation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695978)\n- [SOSP'24] [Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05385)\n- [arxiv'24] [LLMServingSim: A HW\u002FSW Co-Simulation Infrastructure for LLM Inference Serving at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.05499v1)\n- [ICPP'24] [GMM: An Efficient GPU 
Memory Management-based Model Serving System for Multiple DNN Inference Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673122)\n- [SIGCOMM'24] [CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672274)\n- [ES-FoMO @ ICML'24] [CO2: Precise Attention Score Observation for improving KV Cache Replacement in Large Language Models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=02zPmtcZa0)\n- [OSDI'24] [dLoRA: Dynamically Orchestrating Requests and Adapters for LoRA LLM Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fwu-bingyang)\n- [OSDI'24] [Parrot: Efficient Serving of LLM-based Applications with Semantic Variable](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flin-chaofan)\n- [OSDI'24] [USHER: Holistic Interference Avoidance for Resource Optimized ML Inference](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fshubha)\n- [OSDI'24] [Fairness in Serving Large Language Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fsheng)\n- [OSDI'24] [MonoNN: Enabling a New Monolithic Optimization Space for Neural Network Inference Tasks on Modern GPU-Centric Architectures](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhuang)\n- [OSDI'24] [Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fagrawal)\n- [OSDI'24] [ServerlessLLM: Low-Latency Serverless Inference for Large Language Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Ffu)\n- [OSDI'24] [InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Flee)\n- [OSDI'24] [Llumnix: Dynamic Scheduling for Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fsun-biao)\n- [OSDI'24] [DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhong-yinmin)\n- [ATC'24] [Power-aware Deep Learning Model Serving with μ-Serve](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fqiu)\n- [ATC'24] [Fast Inference for Probabilistic Graphical Models](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fjiang)\n- [ATC'24] [Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fgao-bin-cost)\n- [ATC'24] [PUZZLE: Efficiently Aligning Large Language Models through Light-Weight Context Switch](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Flei)\n- [ATC'24] [Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxia)\n- [TPDS'24] [ElasticBatch: A Learning-Augmented Elastic Scheduling System for Batch Inference on MIG](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10605084)\n- [Survey :mag:] [arxiv'24] [LLM 
Inference Serving: Survey of Recent Advances and Opportunities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.12391)\n- [arxiv'24] [Metron: Holistic Performance Evaluation Framework for LLM Inference Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.07000)\n- [arxiv'24] [Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2407.00066)\n- [arxiv'24] [One Queue Is All You Need: Resolving Head-of-Line Blocking in Large Language Model Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00047)\n- [OSDI'24] [Parrot: Efficient Serving of LLM-based Applications with Semantic Variable](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19888)\n- [arxiv'24] [MemServe: Context Caching for Disaggregated LLM Serving with Elastic Memory Pool](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17565)\n- [ISCA'24] [ElasticRec: A Microservice-based Model Serving Architecture Enabling Elastic Resource Scaling for Recommendation Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06955v1)\n- [ISCA'24] [Splitwise: Efficient generative LLM inference using phase splitting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18677)\n- [ICML'24] [Break the Sequential Dependency of LLM Inference Using Lookahead Decoding](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Ffu24a.html)\n- [ICML'24] [Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads](https:\u002F\u002Fopenreview.net\u002Fforum?id=PEpbUobfJv)\n- [ICML'24] [HexGen: Generative Inference of Large Language Model over Heterogeneous Environment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.11514)\n- [ICML'24] [EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty](https:\u002F\u002Fopenreview.net\u002Fforum?id=1NdN7eXyb4)\n- [ICML'24] [MuxServe: Flexible Spatial-Temporal Multiplexing for Multiple LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02015)\n- [HPCA'24] [An LPDDR-based CXL-PNM Platform for TCO-efficient Inference of Transformer-based Large Language Models](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10476443)\n- [MobiSys'24] [ARISE: High-Capacity AR Offloading Inference Serving via Proactive Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3643832.3661894)\n- [MobiSys'24] [Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3643832.3661878)\n- [arxiv'24] [Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.18628)\n- [arxiv'24] [HawkVision: Low-Latency Modeless Edge AI Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.19213)\n- [MLSys'24] [HeteGen: Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F5431dca75a8d2abc1fb51e89e8324f10-Paper-Conference.pdf)\n- [MLSys'24] [S-LoRA: Serving Thousands of Concurrent LoRA Adapters](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F906419cd502575b617cc489a1a696a67-Paper-Conference.pdf)\n- [MLSys'24] [Vidur: A Large-Scale Simulation Framework For LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05465)\n- [arxiv'24] [The CAP Principle for LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11299)\n- [WWW'24] [λGrapher: A Resource-Efficient Serverless System for GNN Serving through Graph 
Sharing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3589334.3645383)\n- [ICML'24] [CLLMs: Consistency Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00835)\n- [arxiv'24] [BlockLLM: Multi-tenant Finer-grained Serving for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.18322)\n- [EuroSys'24] [Model Selection for Latency-Critical Inference Serving](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629565)\n- [arxiv'24] [Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.14527)\n- [arxiv'24] [Learn To be Efficient: Build Structured Sparsity in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06126)\n- [arxiv'24] [Sponge: Inference Serving with Dynamic SLOs Using In-Place Vertical Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.00704v1)\n- [ISCA'24] [Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12066)\n- [arxiv'24] [Minions: Accelerating Large Language Model Inference with Adaptive and Collective Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.15678)\n- [arxiv'24] [ALTO: An Efficient Network Orchestrator for Compound AI Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.04311)\n- [ASPLOS'24] [ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640383)\n- [ASPLOS'24] [NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.00579)\n- [arxiv'24] [ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2403.02352)\n- [arxiv'24] [Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02310)\n- [ICML'24] [DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01876)\n- [ICLR'24] [Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01801)\n- [arxiv'24] [FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18789)\n- [arxiv'24] [Wisdom of Committee: Distilling from Foundation Model to SpecializedApplication Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14035)\n- [arxiv'24] [RelayAttention for Efficient Large Language Model Serving with Long System Prompts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.14808)\n- [arxiv'24] [LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.01136)\n  - [PPoPP'24 poster] [POSTER: LLM-PQ:Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627535.3638480)\n- [NSDI'24] Approximate Caching for Efficiently Serving Diffusion Models\n- [arxiv'24] [APIServe: Efficient API Support for Large-Language Model Inferencing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01869)\n- [arxiv'24] [ServerlessLLM: Locality-Enhanced Serverless Inference for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14351)\n- [arxiv'24] [MoE-Infinity: 
Activation-Aware Expert Offloading for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14361)\n- [arxiv'24] [FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14112)\n- [arxiv'24] [Accelerating Retrieval-Augmented Language Model Serving with Speculation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14021)\n- [arxiv'24] [CaraServe: CPU-Assisted and Rank-Aware LoRA Serving for Generative LLM Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11240)\n- [arxiv'24] [Inference without Interference: Disaggregate LLM Inference for Mixed Downstream Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.11181)\n- [arxiv'24] [DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08671)\n- [Survey :mag:] [arxiv'24] [Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.07851)\n- [arxiv'24] [Learned Best-Effort LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.07886)\n- [arxiv'24] [Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.02669)\n- [VLDB'24] [Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp211-xia.pdf)\n- [ASPLOS'24] [SpotServe: Serving Generative Large Language Models on Preemptible Instances](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15566)\n- [ASPLOS'24] [SpecInfer: Accelerating Generative Large Language Model Serving with Speculative Inference and Token Tree Verification](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651335)\n- [arxiv'23] [DeltaZip: Multi-Tenant Language Model Serving via Delta Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05215)\n- [EMNLP'23] [Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding](https:\u002F\u002Faclanthology.org\u002F2023.emnlp-main.362\u002F)\n- [arxiv'23] [Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.08168)\n- [arxiv'23] [Fairness in Serving Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00588)\n- [arxiv'23] [Moirai: Towards Optimal Placement for Distributed Inference on Heterogeneous Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04025)\n- [arxiv'23] [Punica: Multi-Tenant LoRA Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18547)\n- [arxiv'23] [Pipeline Parallelism for DNN Inference with Practical Performance Guarantees](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03703)\n- [arxiv'23] [SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.16369)\n- [arxiv'23] High-throughput Generative Inference of Large Language Models with a Single GPU\n- [NeurIPS'23] [SpecTr: Fast Speculative Decoding via Optimal Transport](https:\u002F\u002Fopenreview.net\u002Fforum?id=SdYHLTCC5J)\n- [HPDC'23] Kairos: Building Cost-Efficient Machine Learning Inference Systems with Heterogeneous Cloud Resources\n- [SOSP'23] Paella: Low-latency Model Serving with Virtualized GPU Scheduling\n- [SOSP'23] [Efficient Memory Management for 
Large Language Model Serving with PagedAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06180)\n- [MLSys'23] [Efficiently Scaling Transformer Inference](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F523f87e9d08e6071a3bbd150e6da40fb-Abstract-mlsys2023.html)\n- [EuroSys'23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access\n- [EuroSys'23] Tabi: An Efficient Multi-Level Inference System for Large Language Models\n- [EuroSys'23] Pocket: ML Serving from the Edge\n- [OSDI'23] AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving\n- [NSDI'23] SHEPHERD: Serving DNNs in the Wild\n- [VLDB'23] Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures\n- [ICML'23] [Fast Inference from Transformers via Speculative Decoding](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fleviathan23a.html)\n- [SIGMOD'22] Serverless Data Science - Are We There Yet? A Case Study of Model Serving\n- [OSDI'22] Orca: A Distributed Serving System for Transformer-Based Generative Models\n- [OSDI'22] Microsecond-scale Preemption for Concurrent GPU-accelerated DNN Inferences\n- [ATC'22] [SOTER: Guarding Black-box Inference for General Neural Networks at the Edge](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc22\u002Fpresentation\u002Fshen)\n- [ATC'22] Serving Heterogeneous Machine Learning Models on Multi-GPU Servers with Spatio-Temporal Sharing\n- [ATC'22] Tetris: Memory-efficient Serverless Inference through Tensor Sharing\n- [ATC'22] PetS: A Unified Framework for Parameter-Efficient Transformers Serving\n- [ATC'21] INFaaS: Automated Model-less Inference Serving\n- [SoCC'21] Morphling: Fast, Near-Optimal Auto-Configuration for Cloud-Native Model Serving\n- [arxiv'21] Supporting Massive DLRM Inference through Software Defined Memory\n- [MobiCom'20] SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud\n\n## Attention Optimization\n- [PPoPP'26] [MetaAttention: A Unified and Efficient Attention Framework across Hardware Backends](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786444)\n- [PPoPP'26] [FlashAttention-T: Fully Tensorized Attention via Tensor-Vector Parallelism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786425)\n- [arxiv'25] [BLASST: Dynamic Blockwise Attention Sparsification via Softmax Thresholding](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2512.12087)\n- [SC'25] [UltraAttn: Efficient Parallelization of Attention via Hierarchical Context Chunking](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [SC'25] [RingX: Scalable Parallel Attention for Long-Context Learning on HPC](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759859)\n- [NeurIPS'25] [Twilight: Adaptive Attention Sparsity with Hierarchical Top-p Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02770)\n- [NeurIPS'25 Spotlight] [SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11594) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSageAttention)]\n- [arxiv'25] [SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.24006) [[Code](https:\u002F\u002Fgithub.com\u002Fthu-ml\u002FSLA)]\n- [MLSys'25] [FastTree: Optimizing Attention Kernel and Runtime for Tree-Structured LLM Inference](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3278)\n- [MLSys'25] [FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [NeurIPS'24] [FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F7ede97c3e082c6df10a8d6103a2eebd2-Abstract-Conference.html)\n- [ICLR'24] 
[FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning](https:\u002F\u002Fopenreview.net\u002Fforum?id=mZn2Xyh9Ec)\n- [NeurIPS'22] [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html)\n\n## Mixture of Experts (MoE)\n- [arxiv'26] [PROBE: Co-Balancing Computation and Communication in MoE Inference via Real-Time Predictive Prefetching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00509)\n- [arxiv'26] [Dynamic Expert Sharing: Decoupling Memory and Parallelism in Mixture-of-Experts Diffusion LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00879)\n- [arxiv'26] [LatentMoE: Towards Optimal Accuracy per FLOP and per Parameter in Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.18089)\n- [arxiv'26] [Least-Loaded Expert Parallelism: Balancing Imbalanced Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17111)\n- [arxiv'26] [MixServe: Automatic Serving of Distributed MoE Models with Hybrid Parallelism and Fused Communication Algorithms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.08800)\n- [arxiv'26] [MoE-DisCo: Low-Cost and Economical Mixture-of-Experts Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.06857)\n- [arxiv'26] [MoEBlaze: Breaking the Memory Wall for Efficient MoE Training on Modern GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05296)\n- [arxiv'26] [Improving the Robustness of MoE-based LLM Inference with Tarragon](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01310)\n- [EuroSys'26] Mitigating the Latency-Memory Trade-off in MoE-based LLM Serving via Fine-Grained Expert Offloading\n- [EuroSys'26] [MegaScale-MoE: Large-Scale Communication-Efficient Training of Mixture-of-Experts Models in Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11432)\n\n- [PACT'25] [ScaleMoE: A Fast and Scalable Distributed Training Framework for Large-Scale Mixture-of-Experts Models](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F11282919)\n- [arxiv'25] [FUSCO: High-Performance Distributed Data Shuffling via Transformation-Communication Fusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.22036)\n- [arxiv'25] [Efficient MoE Inference with Fine-Grained Scheduling of Disaggregated Expert Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.21487)\n- [arxiv'25] [UCCL-EP: Portable Expert-Parallel Communication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19849)\n- [arxiv'25] [Remoe: Towards Efficient and Low-Cost MoE Inference in Serverless Computing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18674)\n- [arxiv'25] [Efficient Mixture-of-Agents Serving via Tree-Structured Routing, Adaptive Pruning, and Dependency-Aware Prefill-Decode Overlap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18126)\n- [arxiv'25] [SonicMoE: Accelerating MoE with IO and Tile-aware Optimizations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14080)\n- [arxiv'25] [Janus: Disaggregating Attention and Experts for Scalable MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.13525)\n- [arxiv'25] [Efficient MoE Serving in the Memory-Bound Regime: Balance Activated Experts, Not Tokens](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.09277)\n- [arxiv'25] [Context-Aware Mixture-of-Experts Inference on CXL-Enabled GPU-NDP Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04476)\n- [arxiv'25] [MicroMoE: Fine-Grained Load Balancing for Mixture-of-Experts with Token Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16947)\n- [arxiv'25] [MoDES: Accelerating Mixture-of-Experts Multimodal Large Language Models via Dynamic Expert Skipping](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15690)\n- [arxiv'25] [MoE-SpeQ: Speculative Quantized Decoding with Proactive Expert Prefetching and Offloading for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14102)\n- [arxiv'25] [Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10676)\n- [arxiv'25] [FarSkip-Collective: 
Unhobbling Blocking Communication in Mixture of Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11505)\n- [arxiv'25] [DualSparse-MoE: Coordinating Tensor\u002FNeuron-Level Sparsity with Expert Partition and Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18376)\n- [arxiv'25] [BuddyMoE: Exploiting Expert Redundancy to Accelerate Memory-Constrained Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10054)\n- [SC'25] [Diff-MoE: Efficient Batched MoE Inference with Priority-Driven Differential Expert Caching](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759903)\n- [arxiv'25] [PuzzleMoE: Efficient Compression of Large Mixture-of-Experts Models via Sparse Expert Merging and Bit-packed inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.04805)\n- [SC workshop'25] [Compression Error Sensitivity Analysis for Different Experts in MoE Model Inference](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767377)\n- [SC workshop'25] [Batch Tiling on Attention: Efficient Mixture of Experts Training on Wafer-Scale Processors](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767407)\n- [arxiv'25] [Opportunistic Expert Activation: Batch-Aware Expert Routing for Faster Decode Without Retraining](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02237)\n- [arxiv'25] [HybridEP: Scaling Expert Parallelism to Cross-Datacenter Scenario via Hybrid Expert\u002FData Transmission](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19470)\n- [arxiv'25] [MoE-Prism: Disentangling Monolithic Experts for Elastic MoE Services via Model-System Co-Designs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19366)\n- [arxiv'25] [ReXMoE: Reusing Experts with Minimal Overhead in Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.17483)\n- [arxiv'25] [MergeMoE: Efficient Compression of MoE Models via Expert Output Merging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14436)\n- [MICRO'25] [Optimizing All-to-All Collective Communication with Fault Tolerance on Torus Networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [arxiv'25] [GatePro: Parameter-Free Expert Selection Optimization for Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13079)\n- [MICRO'25] [Stratum: System-Hardware Co-Design with Tiered Monolithic 3D-Stackable DRAM for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05245)\n- [arxiv'25] [Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05497)\n- [arxiv'25] [ElasticMoE: An Efficient Auto Scaling Method for Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02613)\n- [SOSP'25] [KTransformers: Unleashing the Full Potential of CPU\u002FGPU Hybrid Inference for MoE Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764843)\n- [arxiv'25] [GRACE-MoE: Grouping and Replication with Locality-Aware Routing for Efficient Distributed MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25041)\n- [arxiv'25] [MoEs Are Stronger than You Think: Hyper-Parallel Inference Scaling with RoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.17238)\n- [arxiv'25] [DiEP: Adaptive Mixture-of-Experts Compression through Differentiable Expert Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16105)\n- [arxiv'25] [Symphony-MoE: Harmonizing Disparate Pre-trained 
Models into a Coherent Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18542)\n- [NeurIPS'25] [BrainMoE: Cognition Joint Embedding via Mixture-of-Expert Towards Robust Brain Foundation Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=05cVmYJJnb)\n- [NeurIPS'25] [S’MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=LbNL8xGai2)\n- [NeurIPS'25] [The Omni-Expert: A Computationally Efficient Approach to Achieve a Mixture of Experts in a Single Expert Model](https:\u002F\u002Fopenreview.net\u002Fpdf?id=mVRphqQKnb)\n- [NeurIPS'25] [MoESD: Unveil Speculative Decoding's Potential for Accelerating Sparse MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19645)\n- [NeurIPS'25] [FlyLoRA: Boosting Task Decoupling and Parameter Efficiency via Implicit Rank-Wise Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08396)\n- [NeurIPS'25] [FlowMoE: A Scalable Pipeline Scheduling Framework for Distributed Mixture-of-Experts Training](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Fposter\u002F118234)\n- [NeurIPS'25] [FlashMoE: Fast Distributed MoE in a Single Kernel](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04667) [[Code](https:\u002F\u002Fgithub.com\u002Fosayamenja\u002FFlashMoE)]\n- [arxiv'25] [Steering MoE LLMs via Expert (De)Activation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.09660)\n- [arxiv'25] [HD-MoE: Hybrid and Dynamic Parallelism for Mixture-of-Expert LLMs with 3D Near-Memory Processing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.09420)\n- [arxiv'25] [LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02753)\n- [SC'25] [MoE-Compression: How the Compression Error of Experts Affects the Inference Accuracy of MoE Model?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07727)\n- [arxiv'25] [LExI: Layer-Adaptive Active Experts for Efficient MoE Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02753)\n- [arxiv'25] [LongCat-Flash Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01322)\n- [arxiv'25] [Accelerating Mixture-of-Experts Inference by Hiding Offloading Latency with Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.21706)\n- [arxiv'25] [HAP: Hybrid Adaptive Parallelism for Efficient Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19373)\n- [arxiv'25] [MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17467)\n- [SIGCOMM'25] [MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.02263)\n- [ICLR'25] [Ada-K Routing: Boosting the Efficiency of MoE-based LLMs](https:\u002F\u002Fopenreview.net\u002Fforum?id=9CqkpQExe2)\n- [arxiv'25] [Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05567)\n- [ICML'25] [Oracle-MoE: Locality-preserving Routing in the Oracle Space for Memory-constrained Large Language Model Inference](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2025\u002Fposter\u002F43606)\n- [ICML'25] [I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=EuJaF5QsMP)\n- [arxiv'25] [Chain-of-Experts: Unlocking the Communication Power of Mixture-of-Experts 
Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.18945)\n- [SC'25] [X-MoE: Enabling Scalable Training for Emerging Mixture-of-Experts Architectures on HPC Platforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.13337)\n- [SIGCOMM'25] [MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training](https:\u002F\u002Fxcwanandy.github.io\u002Fpapers\u002F2025\u002Fmixnet-sigcomm25.pdf)\n- [ATC'25] [PopFetcher: Towards Accelerated Mixture-of-Experts Training Via Popularity Based Expert-Wise Prefetch](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fzhang-junyi)\n- [arxiv'25] [HierMoE: Accelerating MoE Training with Hierarchical Token Deduplication and Expert Swap](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09591)\n- [arxiv'25] [PiKV: KV Cache Management System for Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.06526)\n- [arxiv'25] [BrownoutServe: SLO-Aware Inference Serving under Bursty Workloads for MoE-based LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.17133)\n- [arxiv'25] [Towards Greater Leverage: Scaling Laws for Efficient Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.17702)\n- [ACL'25] [EAC-MoE: Expert-Selection Aware Compressor for Mixture-of-Experts Large Language Models](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.633.pdf)\n- [ACL'25] [FOLDMOE: Efficient Long Sequence MoE Training via Attention-MoE Pipelining](https:\u002F\u002Faclanthology.org\u002F2025.acl-long.186.pdf)\n- [arxiv'25] [The New LLM Bottleneck: A Systems Perspective on Latent Attention and Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.15465)\n- [arxiv'25] [Muon is Scalable for LLM Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16982v1)\n- [arxiv'25] [Long-Tailed Distribution-Aware Router For Mixture-of-Experts in Large Vision-Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.01351)\n- [arxiv'25] [Sub-MoE: Efficient Mixture-of-Expert LLMs Compression via Subspace Expert Merging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.23266)\n- [arxiv'25] [HarMoEny: Efficient Multi-GPU Inference of MoE Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.12417)\n- [arxiv'25] [Load Balancing Mixture of Experts with Similarity Preserving Routers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.14038)\n- [arxiv'25] [MoE-GPS: Guidlines for Prediction Strategy for Dynamic Expert Duplication in MoE Load Balancing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.07366)\n- [arxiv'25] [EvoMoE: Expert Evolution in Mixture of Experts for Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23830)\n- [arxiv'25] [CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17553)\n- [arxiv'25] [PreMoe: Lightening MoEs on Constrained Memory by Expert Pruning and Retrieval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17639)\n- [arxiv'25] [Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16056)\n- [arxiv'25] [Toward Cost-Efficient Serving of Mixture-of-Experts with 
Asynchrony](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.08944)\n- [ICML'25] [FloE: On-the-Fly MoE Inference on Memory-constrained GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05950)\n- [arxiv'25] [PT-MoE: An Efficient Finetuning Framework for Integrating Mixture-of-Experts into Prompt Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09519)\n- [arxiv'25] [MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03804)\n- [arxiv'25] [Faster MoE LLM Inference for Extremely Large Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.03531)\n- [arxiv'25] [Accelerating Mixture-of-Experts Training with Adaptive Expert Replication](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19925)\n- [NAACL'25] [MoLA: MoE LoRA with Layer-wise Expert Allocation](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.284\u002F)\n- [NAACL'25] [Marrying LLMs with Dynamic Forecasting: A Graph Mixture-of-expert Perspective](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.24.pdf)\n- [NAACL'25] [Sparser Mixture-of-Adapters with Cross-Layer Generalization](https:\u002F\u002Faclanthology.org\u002F2025.naacl-long.201\u002F)\n- [NAACL'25] [SimSMoE: Toward Efficient Training Mixture of Experts via Solving Representational Collapse](https:\u002F\u002Faclanthology.org\u002F2025.findings-naacl.107\u002F)\n- [Mobicom'25] [D2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15299)\n- [arxiv'25] [MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2504.14960)\n- [arxiv'25] [MoE-Gen: High-Throughput MoE Inference on a Single GPU with Module-Based Batching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09716)\n- [arxiv'25] [Unveiling Hidden Collaboration within Mixture-of-Experts in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12359)\n- [arxiv'25] [Dense Backpropagation Improves Training for Sparse Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12463)\n- [arxiv'25] [MoE-Lens: Towards the Hardware Limit of High-Throughput MoE LLM Serving Under Resource Constraints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09345)\n- [arxiv'25] [C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07964)\n- [arxiv'25] [Cluster-Driven Expert Pruning for Mixture-of-Experts Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07807)\n- [arxiv'25] [S'MoRE: Structural Mixture of Residual Experts for LLM Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06426)\n- [DAC'25] [HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05897)\n- [arxiv'25] [Finding Fantastic Experts in MoEs: A Unified Study for Expert Dropping Strategies and Observations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.05586)\n- [arxiv'25] [HeterMoE: Efficient Training of Mixture-of-Experts Models on Heterogeneous GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03871)\n- [TKDE'25] [A Survey on Mixture of 
Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.06204)\n- [ICLR'25] [NetMoE: Accelerating MoE Training through Dynamic Sample Placement](https:\u002F\u002Fopenreview.net\u002Fforum?id=1qP3lsatCR)\n- [arxiv'25] [ProMoE: Fast MoE-based LLM Serving using Proactive Caching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.22134)\n- [arxiv'25] [Mixture of Lookup Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.15798v1)\n- [EuroSys'25] [Samoyeds: Accelerating MoE Models with Structured Sparsity Leveraging Sparse Tensor Cores](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10725)\n- [EuroMLSys'25] [Priority-Aware Preemptive Scheduling for Mixed-Priority Workloads in MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09304)\n- [EuroMLSys'25] [Accelerating MoE Model Inference with Expert Sharding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.08467)\n- [arxiv'25] [eMoE: Task-aware Memory Efficient Mixture-of-Experts-Based (MoE) Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06823)\n- [KDD'25] [ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.06881)\n- [arxiv'25] [Continual Pre-training of MoEs: How robust is your router?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05029)\n- [arxiv'25] [Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05139)\n- [arxiv'25] [Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.05066)\n- [arxiv'25] [Speculative MoE: Communication Efficient Parallel MoE Inference with Speculative Token and Expert Pre-scheduling](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2503.04398)\n- [MLSys'25] [Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19811v3)\n- [arxiv'25] [CoSMoEs: Compact Sparse Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.00245)\n- [CVPR'25] [DeRS: Towards Extremely Efficient Upcycled Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01359)\n- [ASPLOS'25] [CoServe: Efficient Collaboration-of-Experts (CoE) Model Inference with Limited Memory](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2503.02354)\n- [arxiv'25] [Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.19811)\n- [arxiv'25] [BigMac: A Communication-Efficient Mixture-of-Experts Model Structure for Fast Training and Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16927)\n- [arxiv'25] [DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12455)\n- [arxiv'25] [Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.12947)\n- [arxiv'25] [MoETuner: Optimized Mixture of Expert Serving with Balanced Expert Placement and Token Routing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06643)\n- [arxiv'25] [Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.05172)\n- [arxiv'25] [Fair-MoE: Fairness-Oriented Mixture of Experts in Vision-Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06094)\n- [arxiv'25] [fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts 
Serving](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.05370)\n- [TPDS'25] [EfficientMoE: Optimizing Mixture-of-Experts Model Training with Adaptive Load Balance](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fjournal\u002Ftd\u002F5555\u002F01\u002F10876795\u002F247s0GLFJN6)\n- [arxiv'25] [Hecate: Unlocking Efficient Sparse Model Training via Fully Sharded Sparse Data Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02581)\n- [NAACL'25] [MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00997)\n- [arxiv'25] [BTS: Harmonizing Specialized Experts into a Generalist LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00075)\n- [ASPLOS'25] [FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.10714)\n- [arxiv'25] [Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.11873)\n- [arxiv'25] [Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12370)\n- [arxiv'25] [Optimizing Distributed Deployment of Mixture-of-Experts Model Inference in Serverless Computing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.05313)\n- [MICRO'24] [SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10764648)\n- [TPDS'24] [MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10494556)\n  - Journal version of [IPDPS'23] [MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [arxiv'24] [DeepSeek-V3 Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.19437)\n- [arxiv'24] [HEXA-MoE: Efficient and Heterogeneous-aware MoE Acceleration with ZERO Computation Redundancy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01288)\n- [arxiv'24] [Communication-Efficient Sparsely-Activated Model Training via Sequence Migration and Token Condensation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15419)\n- [arxiv'24] [Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.15901)\n- [arxiv'24] [ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing](https:\u002F\u002Fopenreview.net\u002Fforum?id=4D0f16Vwc3)\n- [Survey :mag:] [arxiv'24] [A Survey on Inference Optimization Techniques for Mixture of Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14219)\n- [arxiv'24] [DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10302)\n- [arxiv'24] [Llama 3 Meets MoE: Efficient Upcycling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.09952)\n- [arxiv'24] [Sparsing Law: Towards Large Language Models with Greater Activation Sparsity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02335)\n- [arxiv'24] [Mixture of A Million Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04153)\n- [arxiv'24] [MoE-CAP: Cost-Accuracy-Performance Benchmarking for Mixture-of-Experts Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.07067)\n- [arxiv'24] [MoESys: A Distributed and Efficient Mixture-of-Experts Training and 
Inference System for Internet Services](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10034)\n- [arxiv'24] [Toward Inference-optimal Mixture-of-Expert Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02852)\n- [arxiv'24] [Expert-Token Resonance: Redefining MoE Routing through Affinity-Driven Active Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.00023)\n- [MLArchSys'24 @ ISCA'24] [MoE-ERAS: Expert Residency Aware Selection](https:\u002F\u002Fopenreview.net\u002Fforum?id=o43eHjPEMO)\n- [arxiv'24] [MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.04801)\n- [arxiv'24] [Prediction Is All MoE Needs: Expert Load Distribution Goes from Fluctuating to Stabilizing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.16914)\n- [arxiv'24] [DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)\n- [COLM'24] [Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training](https:\u002F\u002Fopenreview.net\u002Fforum?id=LKEJPySnlt)\n- [ME-FoMo @ ICLR'24] [Scaling Laws for Fine-Grained Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07871)\n- [arxiv'24] [UOE: Unlearning One Expert Is Enough For Mixture-of-experts LLMS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.18797)\n- [ML for Sys workshop @ NeurIPS'24] [IFMoE: An Inference Framework Design for Fine-grained MoE](https:\u002F\u002Fmlforsystems.org\u002Fassets\u002Fpapers\u002Fneurips2024\u002Fpaper41.pdf)\n- [ML for Sys workshop @ NeurIPS'24] [TurboMoE: Enhancing MoE Model Training with Smart Kernel-Fusion and Data Transformation](https:\u002F\u002Fopenreview.net\u002Fforum?id=huy8g3iKy0)\n- [arxiv'24] [Dense Backpropagation Improves Routing for Sparsely-Gated Mixture-of-Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=huy8g3iKy0)\n- [arxiv'24] [MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11217)\n- [arxiv'24] [Pro-Prophet: Systematic Load Balancing Method for Efficient Parallel Training of Large-scale MoE Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.10003)\n- [EMNLP'24] [MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18035)\n- [EMNLP'24] [Mixture of Diverse Size Experts](https:\u002F\u002Faclanthology.org\u002F2024.emnlp-industry.118\u002F)\n- [EMNLP'24] [AdaMOE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models](https:\u002F\u002Faclanthology.org\u002F2024.findings-emnlp.361.pdf)\n- [ACL'24] [Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models](https:\u002F\u002Faclanthology.org\u002F2024.acl-long.334\u002F)\n- [ACL'24] [SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with Tunable Memory Budget](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15030)\n- [SoCC'24] [MoEsaic: Shared Mixture of Experts](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3698038.3698521)\n- [KDD'24] [Efficient Mixture of Experts based on Large Language Models for Low-Resource Data Preprocessing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3637528.3671873)\n- [arxiv'24] [Pipeline MoE: A Flexible MoE Implementation with Pipeline Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11414)\n- [IPDPS'24] [Exploiting Inter-Layer 
Expert Affinity for Accelerating Mixture-of-Experts Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08383)\n- [arxiv'24] [EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12247)\n- [arxiv'24] [Shortcut-connected Expert Parallelism for Accelerating Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.05019)\n- [NeurIPS'24] [Toward Efficient Inference for Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=stXtBqyTWX)\n- [arxiv'24] [Lynx: Enabling Efficient MoE Inference through Dynamic Batch-Aware Expert Selection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.08982)\n- [MLSys'24] [SiDA: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F698cfaf72a208aef2e78bcac55b74328-Abstract-Conference.html)\n- [SC'24] [APTMoE: Affinity-Aware Pipeline Tuning for MoE Models on Bandwidth-Constrained GPU Nodes](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fsc\u002F2024\u002F529100b436\u002F21HUWvO6IIo)\n- [NeurIPS'24] [GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=QtYg4g3Deu)\n- [arxiv'24] [HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01433)\n- [arxiv'24] [Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02265)\n- [NeurIPS'24] [LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing](https:\u002F\u002Fopenreview.net\u002Fforum?id=bjFhVbky5A)\n- [arxiv'24] [Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.11273)\n- [NeurIPS'24] [Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design](https:\u002F\u002Fopenreview.net\u002Fforum?id=i8JaxY7tDI)\n- [arxiv'24] [ExpertFlow: Optimized Expert Activation and Token Allocation for Efficient Mixture-of-Experts Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17954v1)\n- [arxiv'24] [Demystifying the Compression of Mixture-of-Experts Through a Unified Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.02500)\n- [PML4LRS @ ICLR'24] [Fiddler: CPU-GPU Orchestration for Fast Inference of Mixture-of-Experts Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07033)\n- [arxiv'24] [Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17043)\n- [arxiv'24] [MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12013)\n- [arxiv'24] [Duo-LLM: A Framework for Studying Adaptive Computation in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10846)\n- [arxiv'24] [MoH: Multi-Head Attention as Mixture-of-Head Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.11842v1)\n- [arxiv'24] [AT-MoE: Adaptive Task-planning Mixture of Experts via LoRA Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.10896v1)\n- [NeurIPS'24 (Spotlight)] [Flex-MoE: Modeling Arbitrary 
Modality Combination via the Flexible Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.08245v1)\n- [arxiv'24] [Aria: An Open Multimodal Native Mixture-of-Experts Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.05993)\n- [arxiv'24] [MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.06270)\n- [arxiv'24] [MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07348)\n- [arxiv'24] [Upcycling Large Language Models into Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07524)\n- [arxiv'24] [No Need to Talk: Asynchronous Mixture of Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.03529)\n- [arxiv'24] [Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models with Adaptive Expert Placement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04656)\n- [arxiv'24] [HMoE: Heterogeneous Mixture of Experts for Language Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10681)\n- [arxiv'24] [FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11304v1)\n- [arxiv'24] [AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.06567)\n- [arxiv'24] [Layerwise Recurrent Router for Mixture-of-Experts](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2408.06793)\n- [arxiv'24] [Partial Experts Checkpoint: Efficient Fault Tolerance for Sparse Mixture-of-Experts Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04307)\n- [SRW @ ACL'24] [MoExtend: Tuning New Experts for Modality and Task Extension](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.03511v1)\n- [arxiv'24] [MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.01505)\n- [arxiv'24] [Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00945)\n- [arxiv'24] [Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.12034)\n- [arxiv'24] [Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.06563)\n- [ICML'24] [Scaling Laws for Fine-Grained Mixture of Experts](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yoqdlynCRs)\n- [ICML'24] [Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training](https:\u002F\u002Fopenreview.net\u002Fforum?id=uLpyWQPyF9)\n- [MLSys'24] [QMoE: Sub-1-Bit Compression of Trillion-Parameter Models](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002Fc74b624843218d9b6713fcf299d6d5e4-Paper-Conference.pdf)\n- [MLSys'24] [Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [arxiv'24] [CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05949)\n- [arxiv'24] [AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.00361)\n- 
[SIGIR'24] [M3oE: Multi-Domain Multi-Task Mixture-of Experts Recommendation Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.18465)\n- [EuroSys'24] [ScheMoE: An Extensible Mixture-of-Experts Distributed Training System with Tasks Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650083)\n- [arxiv'24] [MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA based Mixture of Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.15159)\n- [ICLR'24] [Mixture of LoRA Experts](https:\u002F\u002Fopenreview.net\u002Fforum?id=uWvKBCYh4S)\n- [arxiv'24] [Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.07816)\n- [arxiv'24] [MoE-Infinity: Activation-Aware Expert Offloading for Efficient MoE Serving](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14361)\n- [IJCAI'24] [LocMoE: A Low-overhead MoE for Large Language Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.13920)\n- [ISCA'24] [Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12066)\n- [IPDPS'23] [MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10177396)\n- [EMNLP'23] [Adaptive Gating in Mixture-of-Experts based Language Models](https:\u002F\u002Faclanthology.org\u002F2023.emnlp-main.217\u002F)\n- [ACL'23] [AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.580\u002F)\n- [ICLR'23] [Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints](https:\u002F\u002Fopenreview.net\u002Fforum?id=T5nUQDrM4u)\n- [ICML'23] [Brainformers: Trading Simplicity for Efficiency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00008)\n- [arxiv'23] [Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06182)\n- [arxiv'23] [Fast Inference of Mixture-of-Experts Language Models with Offloading](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.17238)\n- [ATC'23] [Accelerating Distributed MoE Training and Inference with Lina](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fli-jiamin)\n- [ATC'23] [SmartMoE: Efficiently Training Sparsely-Activated Models through Combining Offline and Online Parallelization](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fzhai)\n- [OSDI'23] [Optimizing Dynamic Neural Networks with Brainstorm](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fcui)\n- [SIGMOD'23] FlexMoE: Scaling Large-scale Sparse Pre-trained Model Training via Dynamic Device Placement\n- [ICS'23] A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training\n- [MLSys'23] [MegaBlocks: Efficient Sparse Training with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F5a54f79333768effe7e8927bcccffe40-Abstract-mlsys2023.html)\n- [MLSys'23] [Tutel: Adaptive Mixture-of-Experts at Scale](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F9412531719be7ccf755c4ff98d0969dc-Abstract-mlsys2023.html)\n- [arxiv'22] [ST-MoE: Designing Stable and Transferable Sparse Expert Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08906)\n- 
[PPoPP'22] [FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503221.3508418)\n- [SustaiNLP @ EMNLP'22] [Who Says Elephants Can't Run: Bringing Large Scale MoE Models into Cloud Scale Production](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10017)\n- [NeurIPS'22] [Mixture-of-Experts with Expert Choice Routing](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F2f00ecd787b432c1d36f3de9800728eb-Abstract-Conference.html)\n- [ICML'22] [DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05596)\n- [ICML'22] [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdu22c\u002Fdu22c.pdf)\n- [JMLR'22] [Switch transformers: scaling to trillion parameter models with simple and efficient sparsity](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3586589.3586709)\n- [EMNLP'21] [Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference](https:\u002F\u002Faclanthology.org\u002F2021.findings-emnlp.304\u002F)\n- [ICLR'17] [Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer](https:\u002F\u002Fopenreview.net\u002Fforum?id=B1ckMDqlg)\n\n## 分布式机器学习的通信优化与网络基础设施\n- [arxiv'26] [HetCCL：利用异构GPU加速LLM训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22585)\n- [PPoPP'26] [COCCL：支持自定义压缩轻松集成与配置的集体通信库，用于可扩展的LLM训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786432)\n- [arxiv'26] [AutoOverlap：基于分块调度实现计算与通信的细粒度重叠](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20595)\n- [arxiv'26] [异构低带宽下的LLM预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.02360)\n- [EuroSys'26] [通过信号传递与重新排序实现高效且自适应的计算与通信重叠](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [LLM训练中通信可预测性的分析](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24750)\n- [arxiv'25] [UCCL-EP：可移植的专家并行通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19849)\n- [arxiv'25] [基于DMA的更细粒度计算通信重叠设计空间探索](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10236)\n- [arxiv'25] [在全栈AMD平台上训练基础模型：计算、网络与系统设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.17127)\n- [arxiv'25] [FarSkip-Collective：解除混合专家模型中阻塞式通信的束缚](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.11505)\n- [arxiv'25] [NCCL中的GPU发起式网络通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15076)\n- [SC workshop'25] [重新设计GROMACS晕交换：借助GPU发起的NVSHMEM提升强缩放性能](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731599.3767508)\n- [SC'25] [理解多节点LLM推理中的通信瓶颈](https:\u002F\u002Fsc25.supercomputing.org\u002Fproceedings\u002Fposters\u002Fposter_files\u002Fpost253s2-file3.pdf)\n- [SC'25] [大型GPU集群上共轭梯度法的CPU和GPU发起式通信策略](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3712285.3759774)\n- [SC'25] [SDR-RDMA：面向行星级RDMA通信的软件定义可靠性架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.05366)\n- [HotNets'25] [ML数据中心中的光子轨道](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.08119)\n- [arxiv'25] [用于高效ML通信卸载的DMA集体通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06605)\n- [arxiv'25] [面向10万+ GPU的集体通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20171)\n- [arxiv'25] [Uno：跨数据中心及内部数据中心拥塞控制与可靠连接的一站式解决方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15802)\n- [SOSP'25] [Mycroft：追踪集体通信中的依赖关系，以实现可靠的LLM训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764848)\n- [MICRO'25] 
[SuperMesh：面向加速器的节能集体通信](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756085)\n- [MICRO'25] [SkipReduce：（互连）网络稀疏化加速分布式机器学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756092)\n- [MICRO'25] [在环形网络上优化容错的全对全集体通信](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [arxiv'25] [MSCCL++：为前沿AI应用重新思考GPU通信抽象](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09014)\n- [arxiv'25] [迈向机器学习作业形态与集群拓扑的协同适配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.03891)\n- [APNET'25] [以自动并行化重新思考动态网络与异构计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3735358.3735382)\n- [arxiv'25] [带有拖尾任务的高效AllReduce](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23523)\n- [arxiv'25] [TASP：拓扑感知的序列并行性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26541)\n- [NAIC @ SIGCOMM'25] [Chronos：为LLM训练预先安排的电路交换](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3748273.3749210)\n- [arxiv'25] [二叉树：通过优化通信局部性增强集体操作](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17311)\n- [SIGCOMM'25] [Falcon：一种可靠、低延迟的硬件传输协议](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3754353)\n- [SIGCOMM'25] [ByteScale：在16384个GPU上以2048K上下文长度实现LLM训练的通信效率规模化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3754352)\n- [SIGCOMM'25] [从ATOP到ZCube：自动化拓扑优化流水线及适用于大模型训练的高性价比网络拓扑](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750503)\n- [SIGCOMM'25] [Astral：面向大规模语言模型训练的数据中心基础设施](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3718958.3750521)\n- [SIGCOMM'25] [ResCCL：面向集体通信的资源高效调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.09591)\n- [OSDI'25] [ZEN：以稀疏驱动的数据同步赋能分布式训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwang-zhuang)\n- [OSDI'25] [通过FuseLink实现多网卡上的高效GPU通信](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fren)\n- [arxiv'25] [RoCE BALBOA：为SmartNICs提供的服务增强型数据中心RDMA](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.20412)\n- [arxiv'25] [RailX：面向超大规模LLM训练系统的灵活、可扩展且低成本的网络架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.18889)\n- [arxiv'25] [揭秘NCCL：深入分析GPU通信协议与算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.04786)\n- [APNET'25] [基于消息级别的信令实现AI工作负载的拥塞控制](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3735358.3735378)\n- [ASPLOS'25] [Concerto：面向大规模深度学习的自动通信优化与调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707223)\n- [ISCA'25] [Chimera：大型语言模型中混合并行性的通信融合](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731025)\n- [arxiv'25] [NoLoCo：无需AllReduce的大模型低通信训练方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10911)\n- [arxiv'25] [TokenWeave：分布式LLM推理中的高效计算-通信重叠](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11329)\n- [arxiv'25] [FLASH：GPU集群中的快速全对全通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.09764)\n- [arxiv'25] [MCMComm：面向多芯片模块端到端通信的软硬件协同优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.00041)\n- [arxiv'25] [GenTorrent：利用叠加网络扩展大型语言模型服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.20101)\n- [arxiv'25] [Triton-distributed：使用Triton编译器在分布式AI系统上编程重叠内核](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19442)\n- [arxiv'25] [FlashOverlap：高效重叠通信与计算的轻量级设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [面向GPU网络的可扩展软件传输层](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17307) (`UCCL`) [[代码](https:\u002F\u002Fgithub.com\u002Fuccl-project\u002Fuccl)]\n- [HPCA'25] 
[提升大规模AI训练效率：C4解决方案用于实时异常检测与通信优化](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fhpca\u002F2025\u002F064700b246\u002F25Ko2hVHEEo)\n- [arxiv'25] [HeteroPod：面向通用云原生应用的XPU加速基础设施卸载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23952)\n- [综述 :mag:] [arxiv'25] [面向HPC和ML应用的以GPU为中心的通信方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.24230v1)\n- [EuroMLSys'25] [TAGC：优化分布式Transformer训练中的梯度通信](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3721146.3721946)\n- [arxiv'25] [UB-Mesh：分层本地化的nD全网格数据中心网络架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20377)\n- [MLSys'25] [TileLink：利用以瓦片为中心的原语生成高效的计算-通信重叠内核](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20313)\n- [arxiv'25] [通信高效的语言模型训练规模可靠且稳健：DiLoCo的缩放定律](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.09799)\n- [NSDI'25] [AutoCCL：自动化集体通信调优，加速分布式和并行DNN训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fxu-guanbin)\n- [NSDI'25] [面向集体通信的高效直连拓扑](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.03356)\n- [arxiv'25] [InfinitePOD：利用光路交换收发器构建LLM用的数据中心级高带宽域](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03885)\n- [IEEE MICRO'25] [理解并表征分布式Transformer模型的通信特性](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fmagazine\u002Fmi\u002F5555\u002F01\u002F10849609\u002F23IcYe8Lr5m)\n- [arxiv'25] [在多租户SmartNIC上进行推荐系统的网络内预处理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12032)\n- [arxiv'25] [通过低带宽分区扩展前沿的大语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04266)\n- [arxiv'25] [负零的力量：量化大语言模型的数据类型定制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.04052)\n- [arxiv'25] [mFabric：面向混合专家训练的高效且可扩展的结构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.03905)\n- [NSDI'25] [OptiReduce：云端分布式深度学习中弹性且尾部最优的AllReduce](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06993)\n- [APNET'24] [理解分布式训练的通信特征](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3663408.3663409)\n- [arxiv'24] [TokenRing：通过双向通信实现无限上下文LLM的高效并行框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20501)\n- [arxiv'24] [以GPU为中心的通信图景](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.09874v2)\n- [arxiv'24] [重温AllReduce的时间成本模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.04202)\n- [arxiv'24] [LuWu：面向分布式GPU上100B规模模型网络内数据并行训练的端到端网络内核外优化器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.00918)\n- [HotInfra'24] [分布式AI任务的即时通信](https:\u002F\u002Fhotinfra24.github.io\u002Fpapers\u002Fhotinfra24-final2.pdf)\n- [NeurIPS'24] [SDP4Bit：迈向LLM训练中分片数据并行的4位通信量化](https:\u002F\u002Fopenreview.net\u002Fforum?id=PEEqnXlSCk)\n- [SC'24] [通过融合计算与集体操作优化分布式ML通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.06942)\n- [SC'24] [面向分布式AI的网络卸载带宽最优广播与Allgather]\n- [NeurIPS'24] [LSH-MoE：通过局部敏感哈希实现通信高效的MoE训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=bjFhVbky5A)\n- [arxiv'24] [LumosCore：具有光学互连的高可扩展LLM集群](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01503)\n- [TPDS'24] [AutoDDL：接近最优带宽成本的自动分布式深度学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06813)\n- [HOTI'24] [统一集体通信(UCC)：面向CPU、GPU和DPU集体的统一库](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10664373)\n- [HOTI'24] [仅轨道：面向万亿参数LLM训练的低成本高性能网络](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10664412)\n- [SC'24] [无交换机的蜻蜓架构于晶圆之上：基于晶圆级集成的可扩展互连架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.10290)\n- [HPDC'24] [接近最优的晶圆级归约](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658693)\n- [HPDC'24] [面向直连拓扑的高效全对全集体通信调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658656)\n- [arxiv'24] 
[HiCCL：分层集体通信库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.05962)\n- [ICS'24] [gZCCL：面向GPU集群的压缩加速集体通信框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3650200.3656636)\n- [ICS'24] [Snoopie：多GPU通信剖析与可视化工具](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3650200.3656597)\n- [arxiv'24] [CSPS：基于通信高效的序列并行性，面向具有长提示的Transformer模型的服务系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [arxiv'24] [Domino：通过通用张量切片与重叠消除LLM训练中的通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15241)\n- [arxiv'24] [探索GPU到GPU通信：洞察超级计算机互连](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.14090v1)\n- [arxiv'24] [揭秘分布式Transformer模型的通信特征](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.10197)\n- [ICPP'24] [采用AlltoAll的稀疏梯度通信加速分布式深度学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673140)\n- [NAIC @ SIGCOMM'24] [分布式DNN训练模拟的灵活高保真方法的概念验证](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3672198.3673793)\n- [NAIC @ SIGCOMM'24] [Eloquent：更鲁棒的LLM令牌流传输方案](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3672198.3673797)\n- [NAIC @ SIGCOMM'24] [OmNICCL：零成本稀疏AllReduce，直接访问缓存与SmartNICs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3672198.3673804)\n- [HotNets'24] [我有99个问题，但FLOPS不是其中之一](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696348.3696893)\n- [HotNets'24] [MLTCP：一种分布式技术，用于近似集中式流量调度以用于机器学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696348.3696878)\n- [HotNets'22] [机器学习集群中的拥塞控制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3563766.3564115)\n- [SIGCOMM'24] [将机器学习集体通信重新思考为多商品流问题](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3651890.3672249)\n- [SIGCOMM'24] [面向元宇宙规模分布式训练的以太网RDMA](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672233)\n- [SIGCOMM'24] [使用消费级GPU加速多集群环境中的模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672228)\n- [SIGCOMM'24] [MCCS：面向多租户云的基于服务的集体通信方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672252)\n- [SIGCOMM'24] [Crux：面向深度学习训练的GPU高效通信调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672239)\n- [arxiv'24] [MLTCP：DNN训练中的拥塞控制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09589)\n  - [HotNets'24] [MLTCP：一种分布式技术，用于近似集中式流量调度以用于机器学习](https:\u002F\u002Fpeople.csail.mit.edu\u002Fghobadi\u002Fpapers\u002Fmltcp_hotnets_2024.pdf)\n- [arxiv'24] [ForestColl：在异构网络结构上实现高效集体通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.06787)\n- [APNet'24] [理解分布式训练的通信特征](https:\u002F\u002Fpeople.csail.mit.edu\u002Fzhizhenzhong\u002Fpapers\u002F2024_APNET.pdf)\n- [ICLR'24] [ZeRO++：极其高效的大型模型训练集体通信](https:\u002F\u002Fopenreview.net\u002Fforum?id=gx2BT0a9MQ)\n- [ICLR'24] CO2：完全计算-通信重叠的高效分布式训练\n  - [[arxiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.16265)] [[openreview](https:\u002F\u002Fopenreview.net\u002Fforum?id=ZO5cn4IfaN)]\n- [MLSys'24] [L-GreCo：逐层自适应的梯度压缩，用于高效准确的深度学习](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F9069a8976ff06f6443e7f4172990a580-Paper-Conference.pdf)\n- [MLSys'24] [Lancet：通过全图计算-通信重叠加速混合专家训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2024\u002Ffile\u002F339caf45a6fa281cae8adc6465343464-Paper-Conference.pdf)\n- [ASPLOS'24] [T3：透明跟踪与触发，实现计算与集体的细粒度重叠](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3620665.3640410)\n- [ASPLOS'24] [TCCL：发现PCIe GPU集群中更好的通信路径](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651362)\n- 
[ASPLOS'24] [Centauri：通过通信分区实现大型模型训练中计算-通信重叠的有效调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620666.3651379)\n- [ASPLOS'24] [Two-Face：结合集体与单边通信，实现高效分布式SpMM](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3620665.3640427)\n- [NSDI'24] [THC：利用张量同态压缩加速分布式深度学习](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fli-minghao)\n- [综述 :mag:] [arxiv'23] [通信高效的分布式深度学习：全面综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06307)\n- [arxiv'23] [面向数十亿参数大型语言模型训练的优化网络架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12169)\n- [arxiv'23] [FlexShard：面向产业级序列推荐模型的灵活分片](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.02959)\n- [arxiv'23] [重新思考内存与通信成本，以实现高效的大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06003)\n- [arxiv'23] [Zen：面向分布式DNN训练的接近最优的稀疏张量同步](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13254)\n- [arxiv'23] [TACOS：面向分布式训练的拓扑感知集体算法合成器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05301)\n- [INFOCOM'23] [Libra：面向高速网络中数据并行训练的争用感知GPU线程分配](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10228922)\n- [ICDCS'23] [bbTopk：带宽感知的稀疏Allreduce，通过分块稀疏化实现高效分布式训练](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10272502)\n- [ICML'23] [CocktailSGD：在500Mbps网络上微调基础模型](https:\u002F\u002Fproceedings.mlr.press\u002Fv202\u002Fwang23t\u002Fwang23t.pdf)\n  - 与DT-FM（NeurIPS'22）相关\n- [IPDPS'23] [MCR-DL：面向深度学习的混合搭配通信运行时](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08374)\n- [ASPLOS'23] [MSCCLang：微软集体通信语言](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3575693.3575724)\n- [ASPLOS'23] [在大型深度学习模型中通过分解实现依赖计算的通信重叠](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3567955.3567959)\n- [EuroSys'23] A2TP：面向多租户学习的聚合器感知网络内聚合\n- [MLSys'23] Cupcake：面向可扩展通信高效分布式训练的压缩优化器\n- [MLSys'23] 关于优化模型并行通信的讨论\n- [NSDI'23] TopoOpt：联合优化网络拓扑与并行化策略，以应对分布式训练任务\n- [NSDI'23] Better Together：利用SYNDICATE联合优化ML集体调度与执行计划\n- [NSDI'23] TACCL：使用通信草图指导集体算法合成\n- [NSDI'23] [ARK：面向分布式深度学习的GPU驱动代码执行](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fhwang)\n- [EuroSys'22] [乱序反向传播：一种有效的深度学习调度技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3492321.3519563)\n- [ISCA'22] Themis：面向DL模型分布式训练的网络带宽感知集体调度策略\n- [ISCA'22] [面向快速且可扩展的深度学习推荐模型训练的软硬件协同设计](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3470496.3533727)\n- [SC'22] HammingMesh：面向大规模深度学习的网络拓扑\n- [PPoPP'22] 接近最优的稀疏allreduce，用于分布式深度学习\n- [MLSys'22] [在分层系统上合成最佳并行放置与归约策略，用于深度学习](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html) (`P^2`)\n- [ASPLOS'22] [打破分布式机器学习工作负载中的计算与通信抽象壁垒](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.05720) (`CoCoNET`)\n- [EuroSys'21] [DGCL：面向分布式GNN训练的高效通信库](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3447786.3456233)\n- [ICLR'21] [面向异构分层网络的多层级本地SGD](https:\u002F\u002Fopenreview.net\u002Fforum?id=C70cp4Cn32)\n- [SIGMOD'21] 面向异质性的分布式机器学习训练，通过部分归约[也见[2.5](#25-parallelism--distributed-training)]\n- [SC'21] Flare：灵活的网络内allreduce\n- [NSDI'21] [通过网络内聚合扩展分布式机器学习](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi21\u002Fpresentation\u002Fsapio)\n- [ISCA'21] [在分布式深度学习训练平台上实现计算-通信重叠](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FISCA52012.2021.00049)\n- [PPoPP'21] [合成最佳集体算法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3437801.3441620) (`SCCL`)\n- 
[SIGCOMM'21] [SiP-ML：面向机器学习训练的高带宽光网络互连](https:\u002F\u002Fpeople.csail.mit.edu\u002Fghobadi\u002Fpapers\u002Fsipml_sigcomm_2021.pdf)\n- [ISCA'20] [面向共享内存多处理器集体加速的网络内架构](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1109\u002FISCA45697.2020.00085)\n- [NeurIPS'20] [Nimble：轻量级且并行的GPU任务调度，用于深度学习](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F5f0ad4db43d8723d18169b2e4817a160-Abstract.html)\n- [PPoPP'20] 通过部分集体操作驯服深度学习中不平衡的训练工作负载\n- [MLSys'20] [Blink：快速且通用的集体，用于分布式ML](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002Fcd3a9a55f7f3723133fa4a13628cdf03-Abstract.html)\n- [MLSys'20] [PLink：发现并利用数据中心网络局部性，以实现高效的云端分布式训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2020\u002Fhash\u002Feca986d585a03890a412587a2f5ccb43-Abstract.html)\n- [OSDI'20] [面向异构GPU\u002FCPU集群的分布式DNN训练加速统一架构](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fjiang) (`BytePS`)\n- [MLSys'19] [基于优先级的参数传播，用于分布式DNN训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.03960) (`P3`)\n- [MLSys'19] TicTac：通过通信调度加速分布式深度学习\n- [SOSP'19] 一种通用的通信调度器，用于加速分布式DNN训练 (`ByteScheduler`)\n- [ATC'17] Poseidon：面向GPU集群上分布式深度学习的高效通信架构\n\n## 容错与拖尾任务缓解\n- [arxiv'26] [基于KevlarFlow的大语言模型服务中的弹性研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22438)\n- [arxiv'26] [在10万张GPU上使用容错HSDP训练大语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.00277)\n- [PPoPP'26] [CCL-D：大规模模型训练中慢速与挂起异常的高精度诊断系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786429)\n- [PPoPP'26] [Elastor：用于容错分布式训练的弹性高效模型划分与检查点技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786445)\n- [arxiv'26] [利用Tarragon提升基于MoE的大语言模型推理的弹性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01310)\n- [NSDI'26] [气泡攻击：面向大型模型训练的抗拖尾流水线并行技术](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fwu-tianyuan)\n- [arxiv'25] [TTrace：分布式训练的轻量级错误检测与诊断工具](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09280)\n- [arxiv'25] [用于大语言模型训练与推理的可靠且弹性的集合通信库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.25059)\n- [arxiv'25] [SHIFT：面向分布式训练的RDMA故障弹性层](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.11094)\n- [arxiv'25] [FFTrainer：在大语言模型训练中实现近乎无成本状态管理的快速故障转移](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.03644)\n- [arxiv'25] [FailSafe：高性能弹性推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.14116)\n- [arxiv'25] [GoCkpt：基于梯度辅助的多步重叠检查点技术，用于高效的大语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07035)\n- [MICRO'25] [在环形网络上优化带有容错的全对全集合通信](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756057)\n- [APSys'25] [不可或缺的以CPU为中心的GPU检查点技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3725783.3764394)\n- [CLUSTER'25] [Capricorn：具有动态感知能力的高效内存内检查点技术，适用于MoE模型训练](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186488\u002F2aCq0F0jiTu)\n- [arxiv'25] [MoE-PHDS：一个MoE检查点支持灵活的运行时稀疏性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23012)\n- [arxiv'25] [ElasWave：面向可扩展混合并行训练的原生弹性系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00606)\n- [arxiv'25] [带拖尾任务的高效AllReduce算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23523)\n- [SOSP'25] [Mycroft：通过追踪集合通信中的依赖关系实现可靠的大语言模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764848)\n- [SOSP'25] [字节跳动的稳健大语言模型训练基础设施](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.16293)\n- [SC'25] [LowDiff：通过低成本差分实现高效频繁检查点，适用于高性能分布式训练系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04084)\n- [OSDI'25] 
[利用假设分析理解大型模型训练中的拖尾任务](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Flin-jinkun)\n- [SIGMOD'25] [Malleus：通过可塑的数据与模型并行化实现抗拖尾的混合并行训练，用于大规模模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.13333)\n- [arxiv'25] [Checkmate：通过网络梯度复制实现零开销模型检查点](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.13522)\n- [ATC'25] [SAVE：针对GPU内存位翻转的软件实现容错，用于模型推理](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Fzheng)\n- [ATC'25] [通用检查点：一种灵活高效的分布式检查点系统，适用于具有可重构并行性的大规模DNN训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc25\u002Fpresentation\u002Flian)\n- [arxiv'25] [Adaptra：通过流水线自适应实现抗拖尾的混合并行训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19232v1)\n- [arxiv'25] [非均匀张量并行：减轻GPU故障对规模化大语言模型训练的影响](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06095)\n- [arxiv'25] [GPU弹性及其对AI\u002FHPC系统影响的特征化研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.11901)\n- [NSDI'25] [BCP：用于大型基础模型开发的统一检查点系统](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fwan-borui)\n- [NSDI'25] [Minder：用于大规模分布式模型训练的故障机器检测系统](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fdeng)\n- [EuroSys'25] [SkyServe：利用竞价实例跨区域和云端部署AI模型的服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.01438)\n- [ASPLOS'25] [PCcheck：面向ML的持久并发检查点技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707255)\n- [arxiv'24] [FALCON：精准定位并缓解大规模混合并行训练中的拖尾任务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12588)\n- [arxiv'24] [MoEtion：面向大规模专家混合模型的高效可靠检查点技术](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2412.15411)\n- [arxiv'24] [MoC-System：面向稀疏专家混合模型训练的高效容错方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.04307)\n- [arxiv'24] [TrainMover：无需内存开销的高效ML训练实时迁移技术](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2412.12636)\n- [arxiv'24] [云图：利用语言模型和因果洞察实现云系统的高效故障定位](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08694)\n- [arxiv'24] [ByteCheckpoint：用于大型基础模型开发的统一检查点系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.20143)\n- [arxiv'24] [通用检查点：面向大规模分布式训练的高效灵活检查点技术](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18820)\n- [arxiv'24] [Lazarus：通过自适应专家分配实现专家混合模型的弹性训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.04656)\n- [arxiv'24] [PARALLELGPUOS：基于验证推测的并发GPU级别检查点与恢复系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.12079)\n- [SOSP'24] [ReCycle：利用流水线自适应实现大型DNN的弹性训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.14009)\n- [HPDC'24] [DataStates-LLM：面向大语言模型的懒惰异步检查点技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3625549.3658685)\n- [EuroSys'24] [即时检查点：从深度学习训练失败中低成本恢复的方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3650085)\n- [NSDI'24] [Parcae：在抢占式实例上进行主动、优化活体输出的DNN训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi24\u002Fpresentation\u002Fduan)\n- [arxiv'23] [Unicron：规模化自我修复大语言模型训练的成本节约策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.00134)\n- [VLDB'23] [通过纠删码实现推荐模型训练的高效容错](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3611479.3611514)\n- [SOSP'23] [GEMINI：利用内存内检查点实现分布式训练中的快速故障恢复](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613145)\n- [SOSP'23] [Oobleck：利用流水线模板实现大型模型的弹性分布式训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3600006.3613152)\n- [NSDI'23] [Bamboo：使抢占式实例更具弹性，从而实现大型DNN的经济高效训练]\n- [EuroSys'22] [Varuna：可扩展、低成本的大规模深度学习模型训练]\n- [ATC'22] [Sibylla：关于深度学习作业失败是否应重试的探讨]\n- [MLSys'21] 
[理解并改进深度学习推荐系统的部分恢复型容错训练](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2021\u002Fhash\u002Ff0f9e98bc2e2f0abc3e315eaa0d808fc-Abstract.html)\n- [FAST'21] [CheckFreq：高频、细粒度的DNN检查点技术]\n- [ICSE'20] [深度学习作业程序失败的实证研究]\n\n## GPU 内存管理与优化\n- [SC'25] [HELM：统一内存访问特性分析，以提升内存超分配下的 GPU 性能](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759812)\n- [SC'25] [MLP-Offload：用于 LLM 预训练的多级、多路径卸载，突破 GPU 内存墙](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02480)\n- [arxiv'25] [CARMA：具备 GPU 内存估算器的共置感知资源管理器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.19073)\n- [arxiv'25] [通过时空规划减少 GPU 内存碎片化，实现高效的大规模模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.16274)\n- [ISCA'25] [Forest：访问感知的 GPU UVM 管理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731047)\n- [EuroSys'25] [MEPipe：利用经济高效加速器上的内存高效切片级流水线调度普及 LLM 训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3689031.3717469)\n- [EuroSys'25] [Mist：通过内存并行性协同优化实现大型语言模型的高效分布式训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.19050)\n- [FAST'25 WiP] Baton：在异构集群上为 LLM 训练编排 GPU 内存\n- [CGO'25] [IntelliGen：面向张量程序的指令级自动调优，结合单调性内存优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696443.3708967)\n- [arxiv'25] [DeepSeek 模型训练过程中的内存分析](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.07846)\n- [IJCAI'24] [LLMem：预训练 LLM 微调时的 GPU 内存用量估算](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2024\u002F0699.pdf)\n- [MICRO'24] [SambaNova SN40L：借助数据流与专家组合突破 AI 内存墙](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07518v2)\n- [arxiv'24] [利用 4D 并行性和内存消耗估算器加速大型语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.06465)\n- [TACO'24] [ATP：通过智能 GPU 内存管理实现 DNN 训练的吞吐量峰值](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3701996)\n- [ICML'24] [GaLore：通过梯度低秩投影实现内存高效的 LLM 训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=hYHsrKDiX7)\n- [ASPLOS'24] [GMLake：利用虚拟内存拼接技术，为大规模 DNN 训练提供高效透明的 GPU 内存去碎片化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08156)\n- [arxiv'23] [重新思考内存与通信开销，以实现高效的大规模语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06003)\n- [arxiv'23] 具有收敛保证的大模型量化分布式训练（`QSDP`）\n- [arxiv'23] 压缩激活值是否有助于模型并行训练？\n- [SoCC'23] 向规模化分布式训练的 GPU 内存效率迈进\n- [VLDB'23] [PyTorch FSDP：全分片数据并行扩展经验](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp3848-huang.pdf)\n- [SOSP'23] 使用 PagedAttention 实现大型语言模型推理的高效内存管理\n- [HPCA'23] MPress：通过节省内存的算子间并行化，在多 GPU 服务器上普及十亿参数级模型训练\n- [HPCA'23] 多 GPU 训练系统中的张量移动编排\n- [IJCAI'23] [OSDP：分布式深度学习的最佳分片数据并行方案](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2023\u002F0238.pdf)\n- [ICLR'22] [LoRA：大型语言模型的低秩适应](https:\u002F\u002Fopenreview.net\u002Fforum?id=nZeVKeeFYf9)\n  - 一种算法层面的内存效率方法\n- [VLDB'22] Harmony：克服 GPU 内存容量限制，在通用服务器上训练超大规模 DNN 模型\n- [ATC'21] ZeRO-Offload：普及十亿参数级模型训练\n- [ICLR'21] ActNN：通过 2 位激活压缩训练降低训练内存占用\n- [ICLR'21] 动态张量再材料化\n- [SC'21] ZeRO-infinity：突破极端规模深度学习的 GPU 内存墙\n- [HPCA'21] [Sentinel：针对深度学习的异构内存系统中的高效张量迁移与分配](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9407112)\n- [MLSys'20] Checkmate：通过最优张量再材料化打破内存墙\n- [ASPLOS'20] Capuchin：基于张量的深度学习 GPU 内存管理\n- [ASPLOS'20] SwapAdvisor：通过智能交换将深度学习推向超出 GPU 内存限制的境界\n- [ESEC\u002FFSE'20] 估算深度学习模型的 GPU 内存消耗\n- [SC'20] ZeRO：面向万亿参数模型训练的内存优化\n- [ISCA'18] Gist：深度神经网络训练的高效数据编码\n- [PPoPP'18] Superneurons：用于深度神经网络训练的动态 GPU 内存管理\n- [MICRO'16] vDNN：虚拟化深度神经网络，实现可扩展、内存高效的神经网络设计\n- [arxiv'16] 以次线性内存成本训练深度网络\n\n## GPU 共享\n- [arxiv'25] [MSched：通过主动内存调度实现 GPU 多任务处理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.24637)\n- [SC workshop'25] 
[WAGES：面向节能无服务器大模型推理的负载感知 GPU 共享系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767396)\n- [SOSP'25] [LithOS：用于 GPU 上高效机器学习的操作系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3731569.3764818)\n- [arxiv'25] [迈向 LLM 时代下高效实用的 GPU 多任务处理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.08448)\n- [arxiv'25] [Prism：释放 GPU 共享潜力，实现低成本多大模型推理服务](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.04021)\n- [OSDI'25] [XSched：面向多样化 XPU 的抢占式调度](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fshen-weihang)\n- [EuroSys'25] [通过自适应无气泡时空共享提升 GPU 共享性能](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3689031.3696070)\n- [PPOPP'25] [SGDRC：面向 NVIDIA GPU 上并发 DNN 推理的软件定义动态资源控制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3710848.3710863)\n- [arxiv'24] [PREBA：面向多实例 GPU 的 AI 推理服务器的软硬件协同设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.19114)\n- [SC'24] [ParvaGPU：面向云环境大规模 DNN 推理的高效空间 GPU 共享](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.14447)\n- [arxiv'24] [Tally：面向并发深度学习工作负载的非侵入式性能隔离](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07381)\n- [ICPP'24] [MIGER：将多实例 GPU 与多进程服务集成用于深度学习集群](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3673038.3673089)\n- [ASPLOS'24] [RAP：面向多 GPU 推荐模型训练及输入预处理的资源感知自动化 GPU 共享](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640406)\n- [EuroSys'24] [Orion：面向 ML 应用的干扰感知、细粒度 GPU 共享](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3627703.3629578)\n- [ATC'23] [警惕碎片化：基于碎片化梯度下降调度 GPU 共享工作负载](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc23\u002Fpresentation\u002Fweng)\n- [NSDI'23] [面向深度学习工作负载的容器云中透明 GPU 共享](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi23\u002Fpresentation\u002Fwu)\n- [ICPP'23] FaST-GShare：在无服务器计算中为深度学习推理启用高效的时空 GPU 共享\n- [arxiv'23] [GACER：面向多租户深度学习的粒度感知并发调控](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11745)\n- [arxiv'23] MuxFlow：在大规模生产级深度学习集群中实现高效安全的 GPU 共享\n- [SoCC'22] MISO：在多租户 GPU 集群上利用多实例 GPU 能力\n- [PACT'22] GPUPool：一种面向云端细粒度 GPU 共享的整体方法\n- [ATC'21] Zico：面向并发 DNN 训练的高效 GPU 内存共享\n- [MLSys'20] Salus：面向深度学习应用的细粒度 GPU 共享原语\n- [OSDI'20] AntMan：面向深度学习的 GPU 集群动态扩缩容\n- [OSDI'20] PipeSwitch：面向深度学习应用的快速流水线上下文切换\n- [RTAS'19] [分数 GPU：面向 GPU 的基于软件的计算与内存带宽预留](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8743200)\n\n## 编译器\n- [arxiv'26] [Axe: 一种用于机器学习编译器的简单统一布局抽象](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.19092)\n- [arxiv'25] [Tawa: 面向具有异步引用的现代 GPU 的自动 Warp 特化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14719)\n- [arxiv'25] [Dato: 面向数据流加速器的任务型编程模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06794)\n- [arxiv'25] [Flashlight: 用于加速注意力变体的 PyTorch 编译器扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02043)\n- [NeurIPS'25] [REASONING COMPILER: LLM 引导的优化技术，用于高效模型推理](https:\u002F\u002Fopenreview.net\u002Fpdf?id=2D4TuZyNnr)\n- [SOSP'25] [Mercury: 通过远程内存调度解锁面向 LLM 的多 GPU 算子优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764798)\n- [MICRO'25] [StreamTensor: 让张量在面向 LLM 的数据流加速器中流水运行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.13694)\n- [OSDI'25] [PipeThreader: 面向高效 DNN 执行的软件定义流水线技术](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fcheng)\n- [OSDI'25] [QiMeng-Xpiler: 基于神经符号方法为深度学习系统转译张量程序](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fdong)\n- [OSDI'25] [Mirage: 面向张量程序的多级超优化器](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fwu-mengdi)\n- [OSDI'25] 
[KPerfIR: 面向现代 AI 工作负载的 GPU 内核性能工具，构建开放且以编译器为中心的生态体系](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fguan)\n- [arxiv'25] [TileLang: 一种适用于 AI 系统的可组合分块编程模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17577)\n- [arxiv'25] [Hexcute: 一种基于分块的编程语言，具备自动布局与任务映射合成能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16214)\n- [arxiv'25] [DeepCompile: 一种由编译器驱动的分布式深度学习训练优化方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09983)\n- [ASPLOS'25] [Mosaic: 利用 iTex 网格划分在深度学习加速器上挖掘指令级并行性](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716262)\n- [ASPLOS'25] [Concerto: 大规模深度学习的自动通信优化与调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707223)\n- [arxiv'25] [Hercules: 一款用于高效编写异构系统程序的编译器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10855)\n- [CC'25] [LLM 编译器: 以基础语言模型为基础的编译优化技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3708493.3712691)\n- [CGO'25] [IntelliGen: 面向张量程序的指令级自动调优，结合单调性内存优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3696443.3708967)\n- [SOSP'24] [利用 T10 在核间互联智能处理器上扩展深度学习计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695955)\n- [OSDI'23] Cocktailer: 分析与优化深度学习中的动态控制流\n- [OSDI'23] Welder: 通过分块图调度深度学习内存访问\n- [OSDI'23] 有效调度深度神经网络的计算图，以适配其领域专用加速器\n- [OSDI'23] EINNET: 基于推导式变换优化张量程序\n- [OSDI'23] [利用 Brainstorm 优化动态神经网络](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fcui)\n- [OSDI'22] [ROLLER: 面向深度学习的快速高效张量编译](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Fzhu)\n- [OSDI'20] [Rammer: 通过 rTasks 实现全面的深度学习编译器优化](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fma)\n- [OSDI'20] [Ansor: 生成高性能深度学习张量程序](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fzheng)\n- [ASPLOS'20] [FlexTensor: 一套针对异构系统上张量计算的自动调度探索与优化框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3373376.3378508)\n- [OSDI'18] [TVM: 一款面向深度学习的自动化端到端优化编译器](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi18\u002Fpresentation\u002Fchen)\n\n## GPU内核优化\n- [ASPLOS'26] [Tilus：面向低精度计算的分块级通用GPU编程语言](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762219)\n- [EuroSys'26] [通过信号传递与重排序实现高效且自适应的计算与通信重叠](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.19519)\n- [arxiv'25] [Mirage持久化内核：用于张量程序巨内核化的编译器与运行时](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.22219)\n- [arxiv'25] [KernelEvolve：在Meta公司为异构AI加速器规模化推进代理式内核编程](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.23236)\n- [arxiv'25] [在资源受限的GPU上内存高效的块低秩基础模型加速](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20861)\n- [arxiv'25] [FlashFuser：通过核心间连接扩展计算密集型算子的内核融合规模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12949)\n- [arxiv'25] [Flash多头前馈网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.06989)\n- [arxiv'25] [Iris：Triton中的第一类多GPU编程体验](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.12500)\n- [arxiv'25] [AccelOpt：用于AI加速器内核优化的自我改进型LLM代理系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [arxiv'25] [ParallelKittens：多GPU AI内核的系统性与实用性简化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13940)\n- [SC'25] [HyTiS：增强波次利用与缓存局部性的GPU GEMM混合分块调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759771)\n- [SC'25] [UltraAttn：通过层次化上下文分块高效并行化注意力机制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [arxiv'25] [HipKittens：快速而强劲的AMD内核](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08083)\n- [TACO'25] 
[HuntKTm：现代GPU上高效内核执行的混合调度与自动管理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774652)\n- [NeurIPS'25] [FlashMoE：单个内核中的快速分布式MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04667)\n- [MLSys'25] [FlashInfer：面向LLM推理服务的高效且可定制注意力引擎](https:\u002F\u002Fopenreview.net\u002Fforum?id=RXPofAsL8F)\n- [arxiv'25] [LiquidGEMM：面向高性能LLM服务的硬件高效W4A8 GEMM内核](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01229)\n- [arxiv'25] [TileLang：面向AI系统的可组合分块编程模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17577)\n- [PLDI'25] [现代GPU上的基于任务的张量计算](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07004)\n- [TACO'25] [Kitsune：在GPU上启用数据流执行](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3777466)\n- [ICLR'25] [ThunderKittens：简单、快速且可爱的内核](https:\u002F\u002Fopenreview.net\u002Fforum?id=0fJfVOSUra)\n- [PLDI'25] [现代GPU上的基于任务的张量计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3729262)\n- [ASPLOS'25] [通过任务与内核融合编排分布式计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707216)\n- [MLSys'25] [FastTree：面向树形结构LLM推理的注意力内核与运行时优化](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3278)\n- [arxiv'24] [ACS：在不规则、输入依赖的计算图上并发内核执行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12377)\n- [arxiv'24] [Flex Attention：用于生成优化注意力内核的编程模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.05496)\n- [NeurIPS'24] [FlashAttention-3：具有异步性和低精度的快速准确注意力](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2024\u002Fhash\u002F7ede97c3e082c6df10a8d6103a2eebd2-Abstract-Conference.html)\n- [ICLR'24] [FlashAttention-2：更优并行性与工作划分下的更快注意力](https:\u002F\u002Fopenreview.net\u002Fforum?id=mZn2Xyh9Ec)\n- [CGO'24] [用于依赖型GPU内核细粒度同步的框架](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444873)\n- [RTAS'24] [揭秘NVIDIA GPU内部机制以实现可靠的GPU管理](https:\u002F\u002Fwww.cs.unc.edu\u002F~jbakita\u002Frtas24-private.pdf)\n  - 幻灯片：[链接](https:\u002F\u002Fwww.cs.unc.edu\u002F~jbakita\u002Frtas24_slides.pdf)\n- [arxiv'23] [Stream-K：以工作为中心的并行分解，用于GPU上的稠密矩阵乘法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.03598)\n- [OSDI'23] [Welder：通过分块图调度深度学习内存访问](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Fshi)\n- [arxiv'21] [在深度学习工作负载下对NVIDIA GPU并发机制的特性分析](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.00459)\n- [SIGMETRICS'21] [揭示NVIDIA GPU线程块调度器针对并发内核的放置策略](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3453953.3453972)\n- [NeurIPS'20] [Nimble：轻量级且并行的深度学习GPU任务调度](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F5f0ad4db43d8723d18169b2e4817a160-Abstract.html)\n- [NeurIPS'22] [FlashAttention：IO感知的快速且内存高效的精确注意力](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F67d57c32e20fd0a7a302cb81d36e40d5-Abstract-Conference.html)\n- [RTSS'17] [NVIDIA TX2上的GPU调度：隐藏细节大揭秘](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8277284)\n\n## 大模型长上下文\n- [SC'25] [UltraAttn：通过层次化上下文分块实现高效的并行注意力机制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3712285.3759894)\n- [SC'25] [RingX：面向高性能计算的长上下文学习可扩展并行注意力机制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3712285.3759859)\n- [arxiv'25] [通过细粒度序列并行优化长上下文大模型推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.06247)\n- [NeurIPS'25] [StarTrail：同心环状序列并行策略，用于高效训练近乎无限上下文的Transformer模型](https:\u002F\u002Fopenreview.net\u002Fforum?id=PxximqJil4)\n- [arxiv'25] [长上下文注意力基准测试：从核效率到分布式上下文并行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.17896)\n- [arxiv'25] 
[通过核心注意力解耦实现高效的长上下文语言模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.18121)\n- [SOSP'25] [DCP：利用动态上下文并行应对长上下文训练中的输入动态性](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764849)\n- [arxiv'25] [以数据为中心的弹性流水线并行，用于高效训练长上下文大模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.21275)\n- [arxiv'25] [Strata：面向长上下文语言模型推理的服务端层次化上下文缓存](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18572)\n- [arxiv'25] [TokenLake：统一的段级前缀缓存池，用于细粒度弹性长上下文大模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.17219v1)\n- [ACL'25] [MiniKV：通过压缩与系统协同设计，将2比特KV缓存极限推向极致，实现高效长上下文推理](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.952.pdf)\n- [arxiv'25] [HelixPipe：采用注意力并行与流水线并行相结合的方式，高效分布式训练长序列Transformer模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.00394)\n- [arxiv'25] [SALE：低比特估计技术，用于长上下文大模型预填充阶段的高效稀疏注意力计算](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24179)\n- [arxiv'25] [通过分块优化高效训练长上下文大模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16710)\n- [arxiv'25] [SlimPipe：面向长上下文大模型训练的内存友好型高效流水线并行策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.14519)\n- [ASPLOS'25] [FlexSP：通过灵活的序列并行加速大型语言模型训练](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3715998)\n- [arxiv'25] [XAttention：带有反对角线打分的块稀疏注意力机制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.16428)\n- [arxiv'25] [SPPO：通过自适应序列流水线并行卸载，高效训练长序列大模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10377)\n- [arxiv'25] [ByteScale：在超过12,000张GPU上，以2048K上下文长度高效扩展大模型训练规模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.21231)\n- [arxiv'25] [结合检索的推测解码实现长上下文推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [PODC'25] [支持极端长序列Transformer模型训练的系统优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3662158.3662806)\n- [arxiv'25] [ParallelComp：用于长度外推的并行长上下文压缩器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14317)\n- [arxiv'25] [LServe：采用统一稀疏注意力机制的高效长序列大模型推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14866)\n- [arxiv'25] [MoBA：面向长上下文大模型的混合块注意力机制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.13189)\n- [arxiv'25] [Tactic：面向长上下文大模型的自适应稀疏注意力机制，结合聚类与分布拟合](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12216)\n- [arxiv'25] [APB：通过跨GPU传递压缩上下文块加速分布式长上下文推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.12085)\n- [SIGMOD'25] [MEMO：面向超长上下文大模型训练的细粒度张量管理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3709703)\n- [arxiv'25] [Twilight：结合层次化Top-p剪枝的自适应注意力稀疏策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.02770)\n- [arxiv'25] [伴随切片技术，用于状态空间模型的超长上下文训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00692)\n- [arxiv'24] [LoL-PIM：基于可扩展DRAM-PIM系统的长上下文大模型解码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20166)\n- [arxiv'24] [以数据为中心、适应异构性的序列并行，用于高效大模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.01523)\n- [ICLR'24] [具有注意力汇流的高效流式语言模型](https:\u002F\u002Fopenreview.net\u002Fforum?id=NG7sS51zVF) [[代码](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fstreaming-llm)]\n- [SOSP'24] [LoongServe：利用弹性序列并行高效服务长上下文大型语言模型](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3694715.3695948)\n- [arxiv'24] [USP：面向长上下文生成式AI的统一序列并行方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.07719)\n- [arxiv'24] [采用全流水线分布式Transformer架构训练超长上下文语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16978v1)\n- [NeurIPS'24研讨会] [大型语言模型的长上下文RAG性能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03538)\n- [arxiv'24] [ShadowKV：为高吞吐量长上下文大模型推理服务的影子KV缓存](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.21465)\n- [arxiv'24] [Mnemosyne：无需近似即可高效处理数百万上下文长度的大模型推理请求的并行化策略](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.17264)\n- [arxiv'24] 
[CSPS：一种通信高效的序列并行服务系统，适用于具有长提示的Transformer模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.15104)\n- [COLM'24] [TriForce：通过层次化推测解码无损加速长序列生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11912)\n- [arxiv'24] [FocusLLM：通过并行解码扩展大模型上下文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11745)\n- [综述 :mag:] [IJCAI'24] [X-former Elucidator：重振面向长上下文语言建模的高效注意力机制](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2024\u002F904)\n\n## 模型压缩\n> 有关量化论文的完整列表，请参阅 https:\u002F\u002Fgithub.com\u002FEfficient-ML\u002FAwesome-Model-Quantization。\n\n- [PPoPP'26] [JanusQuant：面向长上下文推理的高精度高效2位KV缓存量化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786428)\n- [PPoPP'26] [RoMeo：通过旋转混合精度量化缓解双维度异常值](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786419)\n- [PPoPP'26] [高吞吐量非均匀量化3位LLM推理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3774934.3786423)\n- [arxiv'26] [面向NVFP4推理精度恢复的量化感知蒸馏](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.20088)\n- [arxiv'25] [弥合微尺度FP4量化从理论承诺到实际性能之间的差距](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.23202)\n- [EMNLP'25] [缩小规模，快速服务：为推荐系统压缩并部署高效LLM](https:\u002F\u002Faclanthology.org\u002F2025.emnlp-industry.119.pdf)\n- [NeurIPS'25] [体积减70%，精度不减：通过动态长度浮点（DFloat11）实现高效GPU推理的无损LLM压缩](https:\u002F\u002Fopenreview.net\u002Fforum?id=xdNAVP7TGy)\n- [arxiv'25] [MergeMoE：通过专家输出合并实现MoE模型的高效压缩](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.14436)\n- [CLUSTER'25] [SplitQuant：基于相位感知模型划分与自适应量化，在异构GPU上实现资源高效的LLM离线推理](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Fcluster\u002F2025\u002F11186491\u002F2aCq16HCtPO)\n- [JMLR'25] [BitNet：面向大型语言模型的1位预训练](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume26\u002F24-2050\u002F24-2050.pdf)\n- [OSDI'25] [DecDEC：推进低比特LLM量化的一种系统级方法](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi25\u002Fpresentation\u002Fpark-yeonhong)\n- [arxiv'25] [TAH-QUANT：在慢速网络上的流水线并行中进行有效的激活量化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.01352)\n- [arxiv'25] [DECA：一种支持乱序调用的近核LLM解压缩加速器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.19349)\n- [arxiv'25] [ITERA-LLM：通过迭代张量分解提升8位以下大型语言模型的推理性能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.08981)\n- [ISCA'25] [Transitive Array：一种具有结果重用功能的高效GEMM加速器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16339)\n- [arxiv'24] [利用无损同态压缩加速分布式深度学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07529)\n- [ICML'24] [Any-Precision LLM：低成本部署多种不同规模的LLM](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fpark24e.html)\n- [ACL'23] [逐层蒸馏！以更少的训练数据和更小的模型规模超越更大的语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.02301)\n- [ICLR'23] [GPTQ：面向生成式预训练Transformer的高精度后训练量化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.17323)\n- [OSDI'23] [AdaEmbed：面向大规模推荐模型的自适应嵌入](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi23\u002Fpresentation\u002Flai)\n- [EuroSys'23] 高速DNN训练与Espresso：借助近最优使用策略释放梯度压缩的全部潜力\n- [ICML'22] [TSPipe：借助流水线更快地向教师学习](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Flim22a.html)\n\n## 联邦学习\n- [VLDB'25] [PS-MI：垂直联邦学习中的准确、高效且私密的数据估值](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp3559-zhou.pdf)\n- [arxiv'24] [FedMoE：通过异构专家混合模型实现个性化联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.11304v1)\n- [MLSys'24] [LIFL：一种轻量级、事件驱动的无服务器联邦学习平台](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.10968)\n- [arxiv'24] [FedEx：通过重叠计算与参与者选择加速异构移动设备上的联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.00943)\n- [KDD'24] [FedBiOT：在不使用完整模型的情况下进行联邦学习中的本地大语言模型微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17706)\n- [CCGrid'24] 
[Apodotiko：在异构环境中实现高效的无服务器联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.14033)\n- [EuroSys'24] [Dordis：具有丢弃鲁棒性的差分隐私的高效联邦学习](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3627703.3629559)\n- [arxiv'24] [解耦的垂直联邦学习：用于垂直划分数据的实际训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03871v1)\n- [SAC'24] [在无服务器联邦学习中利用知识蒸馏训练异构客户端模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.07295)\n- [arxiv'23] [CAFE：地理分布的数据中心中的碳感知联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03615)\n- [arxiv'23] [采用参数高效的提示调优和自适应优化的大语言模型联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.15080)\n- [IMWUT'23] [AttFL：用于时间序列移动及嵌入式传感器数据处理的个性化联邦学习框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3610917)\n- [综述 :mag:] [FGCS'23] [联邦学习中的模型聚合技术：全面综述](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0167739X23003333)\n- [SoCC'23] [Auxo：通过可扩展的客户端聚类缓解异构性以实现联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.16656)\n- [MLSys'23] [GlueFL：协调客户端采样与模型掩码以实现带宽高效的联邦学习](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper_files\u002Fpaper\u002F2023\u002Fhash\u002F3ed923f9f88108cb066c6568d3df2666-Abstract-mlsys2023.html)\n- [WWW'23] [存储还是不存储？有限存储空间下的联邦学习在线数据选择](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.00195)\n- [EuroSys'23] REFL：资源高效的联邦学习\n- [VLDB'23] [FederatedScope：面向异构性的灵活联邦学习平台](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp1059-li.pdf)\n- [RecSys'22] [迈向公平的联邦推荐学习：系统与数据异构性的相互依赖性分析](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02633)\n- [TMLR'22] [联邦学习中的最优客户端采样](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13723)\n- [ICML'22] FedScale：大规模联邦学习的模型与系统性能基准测试\n- [MobiSys'22] FedBalancer：为异构客户端上的高效联邦学习提供数据与节奏控制\n- [MobiCom'22] PyramidFL：用于高效联邦学习的细粒度客户端选择框架\n- [MLSys'22] PAPAYA：实用、私密且可扩展的联邦学习\n- [AISTATS'22] 带缓冲的异步聚合联邦学习\n- [NeurIPS'21] 联邦重建：部分本地化的联邦学习\n- [NeurIPS'21] FjORD：在异构目标下，通过有序丢弃实现公平且准确的联邦学习\n- [OSDI'21] Oort：通过引导式参与者选择实现高效联邦学习\n- [MICRO'21] AutoFL：支持异构性的节能联邦学习\n- [MLSys'19] 向规模化联邦学习迈进：系统设计\n- [综述 :mag:] [ACM CSUR'22] 智慧医疗中的联邦学习：综述\n\n## 隐私保护机器学习\n- [arxiv'26] [扩展隐私保护机器学习：Llama-2-7B的CKKS实现](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.18511)\n- [CCS'25] [MoEcho：利用侧信道攻击破坏专家混合模型中的用户隐私](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.15036)\n- [USENIX Security'25] [Phantom：在异构TEE和GPU系统中进行隐私保护的深度神经网络模型混淆](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity25\u002Fsec25cycle1-prepub-1136-bai.pdf)\n- [ASPLOS'24] [LazyDP：为可扩展的差分隐私推荐模型训练协同设计算法与软件](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3620665.3640384)\n- [NeurIPS'24] [Nimbus：面向Transformer的安全高效两方推理](https:\u002F\u002Fopenreview.net\u002Fforum?id=G7QS68ICPJ)\n- [ACL'24] [SecFormer：通过SMPC实现快速且准确的Transformer模型隐私保护推理](https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.790\u002F)\n- [S&P'24] [BOLT：面向Transformer的隐私保护、准确且高效的推理](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10646705)\n- [DAC'23] 在异构神经网络加速器上使用预取元键进行隐私保护的DNN训练\n- [ICLR'23] [MPCFormer：使用MPC实现快速、高性能且私密的Transformer推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01452)\n- [NeurIPS'22] [Iron：面向Transformer的私密推理](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F64e2449d74f84e5b1a5c96ba7b3d308e-Abstract-Conference.html)\n\n## ML API与应用端优化\n- [ASPLOS'25] [借助Ayo实现基于LLM的应用端到端优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716278)\n- [arxiv'24] [APIServe：为大型语言模型推理提供高效的API支持](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.01869)\n- [OSDI'24] 
[ChameleonAPI：自动且高效地为ML应用定制神经网络](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fliu)\n- [ICML'22] [面向多标签分类任务的高效在线ML API选择](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fchen22ad.html)（`FrugalMCT`）\n- [NeurIPS'20] [FrugalML：如何更准确、更经济地使用ML预测API](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F789ba2ae4d335e8a2ad283a3f7effced-Abstract.html)\n\n## 系统领域的机器学习\n- [arxiv'25] [AccelOpt：用于AI加速器内核优化的自进化LLM代理系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [arxiv'25] [ASAP：大规模LLM训练性能自动优化的代理解决方案](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.03844)\n- [NeurIPS'25] [推理编译器：LLM引导的高效模型推理优化](https:\u002F\u002Fopenreview.net\u002Fpdf?id=2D4TuZyNnr)\n- [arxiv'25] [城门之外的蛮族：AI如何颠覆系统研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06189) [[代码](https:\u002F\u002Fgithub.com\u002FUCB-ADRS\u002FADRS)]\n- [arxiv'25] [SuperCoder：基于大语言模型的汇编程序超优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.11480)\n- [HotOS'25] [我如何学会不再担心，转而喜爱学习型操作系统策略](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3713082.3730384)\n- [VLDB'25] [E2ETune：通过微调生成式语言模型进行端到端参数调优](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11581)\n- [SenSys'25] [CheckMate：LLM驱动的近似间歇性计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3715014.3722056)\n- [ICSE'25] [大语言模型作为配置验证器](https:\u002F\u002Fwww.computer.org\u002Fcsdl\u002Fproceedings-article\u002Ficse\u002F2025\u002F056900a204\u002F215aWCaXlSg)\n- [NeurIPS'24] [IaC-Eval：基础设施即代码程序的代码生成基准测试](https:\u002F\u002Fwww.cs-pk.com\u002Fpreprint-iac-eval.pdf)\n- [arxiv'24] [云图：利用语言模型和因果洞察实现云系统的高效故障定位](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.08694)\n- [arxiv'24] [LLMTune：用大语言模型加速数据库参数调优](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11581)\n- [SIGCOMM'24] [NetLLM：面向网络的大语言模型适配](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3651890.3672268)\n- [arxiv'24] [LLM增强的数据管理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.02643)\n- [arxiv'24] [MPIrigen：通过领域专用语言模型生成MPI代码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.09126)\n- [arxiv'24] [大语言模型能编写并行代码吗？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12554)\n- [arxiv'23] [LLM辅助代码清理：用于训练准确代码生成器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.14904)\n- [arxiv'23] [大语言模型在编译器优化中的应用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.07062)\n- [VLDB'23] [大语言模型将如何颠覆数据管理](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol16\u002Fp3302-fernandez.pdf)\n\n## 能源效率\n- [arxiv'26] [能量去哪儿了？诊断推理能耗](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.22076)\n- [arxiv'26] [Kareus：大型模型训练中动态与静态能耗的联合降低](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17654)\n- [arxiv'26] [GreenServ：面向多模型LLM推理的节能上下文感知动态路由](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.17551)\n- [NeurIPS'25] [CATransformers：通过模型与硬件联合优化实现碳意识Transformer](https:\u002F\u002Fopenreview.net\u002Fpdf?id=IjMZfMVyLF)\n- [MICRO'25] [SuperMesh：面向加速器的节能集体通信](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756085)\n- [MICRO'25] [分布式训练效率的特性分析：从功耗、性能和热管理的角度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.10371)\n- [arxiv'25] [VoltanaLLM：反馈驱动的频率控制与状态空间路由，用于节能LLM推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04827)\n- [arxiv'25] [GreenLLM：面向SLA的动态频率调节，用于节能LLM推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.16449)\n- [arxiv'25] [AI训练数据中心的电源稳定化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.14318)\n- [arxiv'25] [ML.ENERGY基准测试：迈向自动化推理能耗测量与优化](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2505.06371)\n- [arxiv'25] 
[EcoServe：通过主动的实例内及实例间编排，实现经济高效的LLM推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.18154)\n- [NSDI'25] [GREEN：面向机器学习集群的碳高效资源调度](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fxu-kaiqiang)\n- [HPCA'25] throttLL'eM：预测性GPU节流技术，用于节能LLM推理服务\n- [arxiv'25] [EcoServe：设计碳意识AI推理系统](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.05043)\n- [arxiv'25] [AI硬件全生命周期排放：从摇篮到坟墓的方法及世代趋势](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01671)\n- [arxiv'24] [GreenLLM：在异构GPU上解耦大型语言模型推理，以降低碳排放](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.20322)\n- [arxiv'24] [EaCO：资源共享动态及其对DNN训练能源效率的影响](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.08294)\n- [arxiv'24] [DynamoLLM：为性能与能源效率设计LLM推理集群](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00741)\n- [SOSP'24] [Perseus：消除大型模型训练中的能源膨胀](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.06902)\n- [arxiv'23] [CAFE：在地理分布的数据中心中开展碳意识联邦学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03615)\n- [ATC'23] EnvPipe：保存性能的DNN训练框架，以节约能源\n- [NSDI'23] Zeus：理解并优化DNN训练的GPU能耗\n\n## 检索增强生成（RAG）\n- [ICDE'25] [SAGE：RAG的精准检索框架](https:\u002F\u002Fdbgroup.cs.tsinghua.edu.cn\u002Fligl\u002Fpapers\u002FICDE25-SAGE.pdf)\n- [SOSP'25] [HedraRAG：针对异构RAG工作流的生成与检索协同优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731569.3764806)\n- [ISCA'25] [HeterRAG：面向检索增强生成的异构存内处理加速](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731089)\n- [arxiv'25] [Patchwork：RAG推理的统一框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.07833)\n- [arxiv'25] [RAGO：检索增强生成推理的系统性性能优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14649)\n- [arxiv'25] [使用检索增强的推测解码进行长上下文推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.20330)\n- [VLDB'25] [Chameleon：面向检索增强语言模型的异构解耦加速系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09949)\n- [arxiv'24] [迈向理解检索增强生成模型推理中的系统权衡](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11854)\n- [arxiv'24] [RAGServe：具有配置自适应功能的快速高质量RAG系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.10543v1)\n- [arxiv'24] [通过去幻觉化实现检索增强生成的并行上下文扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.14905)\n- [arxiv'24] [通过推测加速检索增强语言模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.14021)\n- [NeurIPS'24研讨会] [大语言模型的长上下文RAG性能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03538)\n\n## 仿真\n- [arxiv'26] [SynPerf：用于 GPU 性能预测的混合解析-机器学习框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14910)\n- [arxiv'26] [Revati：面向 LLM 服务的透明无 GPU 时间卷绕仿真](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.00397)\n- [arxiv'25] [通过符号张量图可扩展地合成分布式 LLM 工作负载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.10480)\n- [MICRO'25] [PyTorchSim：一个全面、快速且精确的 NPU 仿真框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756045)\n- [MICRO'25] [具有细粒度误差建模和层次聚类的快速可靠的大规模 GPU 仿真](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3757107)\n- [arxiv'25] [Frontier：模拟下一代 LLM 推理系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.03148v1)\n- [NAIC @ SIGCOMM'25] [MLSynth：迈向合成 ML 跟踪数据](https:\u002F\u002Faliireza.github.io\u002Ffiles\u002Fmlsynth-naic25.pdf)\n- [NAIC @ SIGCOMM'25] [针对异构计算与网络基础设施的 LLM 训练工作负载仿真](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3748273.3749212)\n- [arxiv'25] [Maya：利用模拟虚拟加速器优化深度学习训练工作负载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20191)\n- [NSDI'25] [借助多实验并行仿真加速 LLM 
训练系统的设计空间探索](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fgui)\n- [ASPLOS'25] [面向深度学习训练与推理的 GPU 性能预测](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3669940.3707265)\n- [MLSys'24] [Vidur：一个用于 LLM 推理的大规模仿真框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05465)\n\n## 面向代理型AI的系统\n- [arxiv'26] [LRAgent：面向多LoRA LLM代理的高效KV缓存共享](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.01053)\n- [arxiv'26] [VisGym：面向多模态代理的多样化、可定制、可扩展环境](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16973)\n- [arxiv'26] [ToolCaching：迈向LLM工具调用的高效缓存](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2601.15335)\n- [arxiv'26] [迈向高效代理：记忆、工具学习与规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14192)\n- [arxiv'26] [Sutradhara：基于工具的代理推理的智能编排引擎协同设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12967)\n- [arxiv'26] [超越最大令牌数：通过LLM代理中的工具调用链实现隐蔽的资源放大](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10955)\n- [arxiv'26] [XGrammar 2：面向代理型LLM的动态且高效的结构化生成引擎](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.04426)\n- [arxiv'26] [Nalar：一个代理服务框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.05109)\n- [arxiv'26] [软件定义的代理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.03197)\n- [NSDI'26] [Agentix：作为通用程序的LLM代理的高效服务引擎](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi26\u002Fpresentation\u002Fluo)\n- [arxiv'25] [ToolOrchestra：通过高效的模型与工具编排提升智能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21689)\n- [arxiv'25] [Nemotron 3 Nano：开放、高效的混合专家架构Mamba-Transformer模型，用于代理推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.20848)\n- [arxiv'25] [迈向高效代理：推理架构与系统的协同设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.18337)\n- [arxiv'25] [超越训练：借助MOBIMEM实现代理的自我进化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.15784)\n- [arxiv'25] [通过推测性工具调用优化代理语言模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.15834)\n- [arxiv'25] [Astraea：面向LLM驱动代理的状态感知调度引擎](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14142)\n- [arxiv'25] [生产环境中代理的度量](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04123)\n- [arxiv'25] [Matrix：点对点多智能体合成数据生成框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.21686)\n- [arxiv'25] [Aragog：面向代理工作流规模化服务的即时模型路由](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.20975)\n- [arxiv'25] [AccelOpt：用于AI加速器内核优化的自改进LLM代理系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15915)\n- [ML for Systems @ NeurIPS'25] [Agentic Bridge框架：弥合代理能力与性能基准之间的差距](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Rv664iOMNv)\n- [arxiv'25] [Continuum：基于KV缓存生存时间的高效稳健多轮LLM代理调度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.02230)\n- [arxiv'25] [Sherlock：可靠高效的代理工作流执行](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00330)\n- [arxiv'25] [以CPU为中心的代理AI视角](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00739)\n- [SAA'25] 有用的代理AI：系统视角\n- [SAA'25] 为代理探索奠定系统基础\n- [SAA'25] 支持我们的AI霸主：重新设计数据系统，使其以代理为先\n- [SAA'25] Cortex：面向代理服务的工作流感知资源池化与调度\n- [SAA'25] Tetris：面向代理和推理负载的高效预测性KV缓存卸载\n- [SAA'25] 多模态模型训练的GPU内存预测\n- [SAA'25] DMAS-Forge：将AI应用透明部署为分布式系统的框架\n- [SAA'25] 基于MCP的代理自动化注释推理\n- [SAA'25] EARL：面向大型语言模型的高效代理强化学习系统\n- [SAA'25] 统一的代理接口足以实现AI代理的可观测性\n- [arxiv'25] [Flash-Searcher：基于DAG并行执行的快速有效网络代理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25301)\n- [arxiv'25] [MobiAgent：面向可定制移动代理的系统化框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00531)\n- [ICML'25] [伯克利函数调用排行榜（BFCL）：从工具使用到大型语言模型的代理评估](https:\u002F\u002Fopenreview.net\u002Fforum?id=2GmDdhBdDk)\n- [SIGCOMM'25] [基于多智能体LLM的意图驱动网络管理：孔子框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3718958.3750537)\n- [arxiv'25] 
[rStar2-Agent：代理推理技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20722)\n- [COLM'25] [R2E-Gym：面向开放式权重SWE代理规模化的程序化环境与混合验证器](https:\u002F\u002Fopenreview.net\u002Fpdf?id=7evvwwdo3z)\n- [arxiv'25] [利用异构系统实现高效可扩展的代理AI](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.19635)\n- [arxiv'25] [Agent.xpu：在异构SoC上高效调度代理LLM工作负载](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.24045)\n- [arxiv'25] [GSO：具有挑战性的软件优化任务，用于评估SWE-代理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23671)\n- [ASPLOS'25] [ReCA：面向实时高效协作式具身自主代理的集成加速](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3676641.3716016)\n- [arxiv'25] [过度思考的危害：审视代理任务中的推理-行动困境](https:\u002F\u002Fwww.arxiv.org\u002Fabs\u002F2502.08235)\n- [arxiv'24] [AI大都市：利用乱序执行扩展基于大型语言模型的多智能体仿真](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03519)\n- [ICML'24] [AnyTool：面向大规模API调用的自我反思、分层代理](https:\u002F\u002Fproceedings.mlr.press\u002Fv235\u002Fdu24h.html)\n\n## 强化学习后训练\n- [ICLR'26] [重新审视大模型后训练中的参数服务器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.19362)\n- [arxiv'26] [Jet-RL：通过统一的训练与采样精度流实现策略内FP8强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.14243)\n- [arxiv'26] [通过滞后期约束的采样协调释放高效异步强化学习后训练潜能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.12784)\n- [arxiv'26] [OrchestrRL：面向解耦架构的强化学习动态计算与网络编排](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.01209)\n- [arxiv'25] [HetRL：异构环境下大模型的高效强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.12476)\n- [arxiv'25] [ThreadWeaver：用于语言模型高效并行推理的自适应线程技术](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.07843)\n- [arxiv'25] [RLHFSpec：通过自适应草稿机制打破RLHF训练中的效率瓶颈](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.04752)\n- [arxiv'25] [通过解耦与Best-of-N推测实现快速大模型后训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16193)\n- [arxiv'25] [驯服长尾问题：基于自适应草稿器的高效推理型强化学习训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.16665)\n- [arxiv'25] [击败长尾：面向分布感知的强化学习训练推测解码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.13841)\n- [arxiv'25] [WeChat-YATT：一个可扩展、简单、高效且生产就绪的训练库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.07970)\n- [arxiv'25] [未走过的路：RLVR可证明地从原则中学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.08567)\n- [arxiv'25] [AReaL-Hex：支持在异构GPU上进行异步强化学习训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.00796)\n- [NeurIPS'25] [贪婪采样在RLHF中被证明是高效的](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.24700)\n- [arxiv'25] [当奖励模型不确定时，请咨询强大的大模型裁判](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20369)\n- [arxiv'25] [RLBoost：利用抢占式资源实现大模型上的低成本强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.19225)\n- [arxiv'25] [Laminar：一个可扩展的异步强化学习后训练框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.12633)\n- [arxiv'25] [大模型强化学习计算规模化的艺术](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.13786)\n- [arxiv'25] [xRouter：基于强化学习的训练成本感知大模型编排系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.08439)\n- [arxiv'25] [混合强化学习：当奖励稀疏时，密集奖励更优](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.07242)\n- [arxiv'25] [从失败中学习：通过故障感知逆向强化学习理解大模型对齐](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06092)\n- [arxiv'25] [虚假奖励：重新思考RLVR中的训练信号](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.10947)\n- [arxiv'25] [SFT-RL后训练中的困境：高SFT分数为何会误导，以及应改用什么](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01624)\n- [arxiv'25] [野外的强化学习：刻画大模型部署中的RLVR训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.25279)\n- [arxiv'25] [APRIL：强化学习中的主动部分采样以驯服长尾生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18521v1)\n- [NeurIPS'25] [AReaL：面向高效且可扩展语言推理的异步强化学习](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Fposter\u002F117538)\n- [arxiv'25] [ToRL：规模化工具集成强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.23383)\n- [arxiv'25] 
[VerlTool：迈向全面的具身强化学习与工具使用](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.01055v1)\n- [arxiv'25] [Parallel-R1：迈向基于强化学习的并行思维](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.07980)\n- [综述 :mag:] [arxiv'25] [大型推理模型强化学习综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08827)\n- [arxiv'25] [RewardDance：视觉生成中的奖励缩放](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.08826)\n- [arxiv'25] [floq：通过流匹配训练批评者以扩展基于价值的强化学习计算能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.06863)\n- [arxiv'25] [ParaThinker：原生并行思维作为扩展大模型推理时计算的新范式](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.04475)\n- [arxiv'25] [历史重演：借助RhymeRL加速大模型强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.18588)\n- [COLM'25] [通过主动探索实现大模型中高效的偏好对齐](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Vi5cIfIslX)\n- [COLM'25] [合成数据生成与多步强化学习用于推理和工具使用](https:\u002F\u002Fopenreview.net\u002Fpdf?id=oN9STRYQVa)\n- [arxiv'25] [SeamlessFlow：一种训练代理隔离的强化学习框架，通过标签调度实现无阻塞流水线](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.11553)\n- [arxiv'25] [SPECS：通过推测草稿实现更快的推理时扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.15733)\n- [arxiv'25] [平衡的智能体初始化：稳定蒸馏型推理模型的RLHF训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00309)\n- [COLM'25] [针对人类反馈强化学习的离策略修正奖励建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.15507)\n- [arxiv'25] [ReTool：面向大模型战略工具使用的强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11536)\n- [IPDPS'25] [FlexRLHF：一个灵活的布局与并行性框架，用于高效RLHF训练](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F11078517)\n- [arxiv'25] [GEPA：反思式提示进化可超越强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.19457)\n- [ACL'25] [RLKGF：无需人工标注的知识图谱反馈强化学习](https:\u002F\u002Faclanthology.org\u002F2025.findings-acl.344.pdf)\n- [arxiv'25] [多模块GRPO：将策略梯度与提示优化相结合用于语言模型程序](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04660)\n- [arxiv'25] [将强化学习扩展到长视频](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.07966)\n- [arxiv'25] [正确执行推理时训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23884)\n- [arxiv'25] [LlamaRL：一个分布式异步强化学习框架，用于高效的大规模大模型训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.24034)\n- [arxiv'25] [具有最优奖励基线的策略内强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.23585)\n- [arxiv'25] [StreamRL：面向大模型的可扩展、异构且弹性强化学习，支持解耦流生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.15930)\n- [arxiv'25] [DAPO：一个大规模开源大模型强化学习系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.14476)\n- [MLSys'25] [ReaL：通过参数重分配实现大型语言模型高效RLHF训练](https:\u002F\u002Fopenreview.net\u002Fforum?id=yLU1zRf95d)\n- [arxiv'25] [奖励推理模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14674)\n- [arxiv'24] [通过阶段融合优化大型语言模型的RLHF训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.13221)\n\n## 多模态\nhttps:\u002F\u002Fgithub.com\u002Ffriedrichor\u002FAwesome-Multimodal-Papers\n\n- [arxiv'26] [vLLM-Omni: 用于任意模态到任意模态多模态模型的完全解耦推理服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2602.02204)\n- [arxiv'26] [VisGym: 针对多模态智能体的多样化、可定制且可扩展环境](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.16973)\n- [arxiv'26] [EPD-Serve: 基于昇腾平台的灵活多模态 EPD 解耦推理服务系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.11590)\n- [ASPLOS'26] [大规模视频 DiT 训练中的动态稀疏性](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3760250.3762216)\n- [arxiv'25] [Cornserve: 高效服务于任意模态到任意模态的多模态模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.14098)\n- [arxiv'25] [FoundationMotion: 视频中空间运动的自动标注与推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.10927)\n- [arxiv'25] [MoDES: 通过动态专家跳过加速混合专家多模态大语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.15690)\n- [SoCC'25] [ModServe: 面向可扩展多模态模型推理的模态与阶段感知资源解耦架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00937)\n- [arxiv'25] [FlowMM: 
基于跨模态信息流指导的 KV 缓存合并，用于高效多模态上下文推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.05534)\n- [arxiv'25] [OmniVinci: 针对全模态理解大语言模型的架构与数据增强](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.15870)\n- [arxiv'25] [Fast-dLLM v2: 高效的块扩散大语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26328)\n- [arxiv'25] [Fast-dLLM: 通过启用 KV 缓存和并行解码实现无训练加速的扩散大语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22618)\n- [arxiv'25] [Mordal: 面向视觉语言模型的自动化预训练模型选择](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.00241)\n- [arxiv'25] [Dimple: 具有并行解码能力的离散扩散多模态大语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.16990)\n- [arxiv'24] [LlamaFusion: 将预训练语言模型适配用于多模态生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15188)\n- [综述 :mag:] [arxiv'24] [资源高效的大语言模型与多模态基础模型综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.08092)\n\n## 混合型大语言模型\n- [MICRO'25] [HLX: 面向混合 Transformer-Mamba 语言模型优化性能的统一流水线架构](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756115)\n- [MLSys'25] [Marconi: 混合型大语言模型时代的前缀缓存技术](https:\u002F\u002Fmlsys.org\u002Fvirtual\u002F2025\u002Fposter\u002F3260)\n\n## 其他\n- [arxiv'26] [长期监控内核与硬件事件以理解延迟波动](https:\u002F\u002Farxiv.org\u002Fabs\u002F2601.10572)\n- [ASPLOS'26] [cuJSON：面向GPU的高度并行JSON解析器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3760250.3762222)\n- [arxiv'25] [Cyclotron：将递归计算编译为分布式与脉动阵列架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.09987)\n- [arxiv'25] [流式张量程序：用于动态并行性的流式抽象](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.07776)\n- [arxiv'25] [OckBench：衡量LLM推理效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.05722)\n- [SC workshop'25] [紧密耦合CPU-GPU超级芯片的Roofline分析：以MI300A和GH200为例的研究](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3731599.3767497)\n- [NeurIPS'25] [Spark Transformer：在FFN与注意力机制中重新激活稀疏性](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.06644)\n- [MICRO'25] [ORCHES：基于协同GPU-PIM异构系统的测试时计算编排型LLM推理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3725843.3756039)\n- [arxiv'25] [vAttention：经过验证的稀疏注意力机制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05688)\n- [USENIX ;login:] [晶圆级AI计算：系统软件视角](https:\u002F\u002Fwww.usenix.org\u002Fpublications\u002Floginonline\u002Fwafer-scale-ai-compute-system-software-perspective)\n- [arxiv'25] [训练大型语言模型，使其通过全局分叉标记并行推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05132)\n- [arxiv'25] [如何训练你的导师：用指导模型引导黑盒LLM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.02453)\n- [arxiv'25] [Slm-mux：编排小型语言模型进行推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.05077)\n- [arxiv'25] [语言模型的混合架构：系统性分析与设计启示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04800)\n- [arxiv'25] [少即是多：利用微型网络进行递归推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.04871v1)\n- [arxiv'25] [ThinKV：面向高效推理模型的思维自适应KV缓存压缩](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01290)\n- [arxiv'25] [重新思考思维标记：将LLM视为改进算子](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01123)\n- [arxiv'25] [具有相互依赖世代的广义并行扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.01143)\n- [arxiv'25] [Composer：混合神经架构设计的搜索框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.00379)\n- [arxiv'25] [dParallel：面向dLLM的可学习并行解码](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.26488)\n- [NeurIPS'25] [深思熟虑且精准：通过替代性推测解码实现卸载LLM的无损、无需训练加速](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.18344)\n- [arxiv'25] [AI工厂：是时候重新思考云与HPC的界限了](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.12849)\n- [arxiv'25] [面向高吞吐量多LLM服务的高效无训练在线路由](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.02718)\n- [arxiv'25] [SharedRep-RLHF：一种基于共享表示的、支持多样化偏好的RLHF方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.03672)\n- [arxiv'25] 
[学会精炼：LLM中的并行推理自我精炼](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00084)\n- [arxiv'25] [LLaVA-Critic-R1：你的批评者模型其实是一个强大的策略模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2509.00676)\n- [arxiv'25] [DeepScholar-Bench：生成式研究综述的实时基准测试与自动化评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.20033)\n- [VLDB'25] [强大的GPU还是高速互连？现代GPU上关系型工作负载的分析](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp4350-kabic.pdf)\n- [arxiv'25] [少即是多：利用全局局部性实现高效推理的无训练稀疏注意力机制](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.07101)\n- [arxiv'25] [基于难度的偏好数据选择：由DPO隐式奖励差距驱动](https:\u002F\u002Farxiv.org\u002Fabs\u002F2508.04149)\n- [arxiv'25] [LobRA：面向异质数据的多租户微调](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol18\u002Fp2616-fu.pdf)\n- [arxiv'25] [ElasticMM：采用弹性多模态并行性的高效多模态LLM服务](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10069)\n- [MICRO'25] [Pimba：面向后Transformer时代大型语言模型服务的内存内计算加速](https:\u002F\u002Farxiv.org\u002Fabs\u002F2507.10178)\n- [CFAgentic @ ICML'25] [LLMSELECTOR：在复合AI系统中学习选择模型](https:\u002F\u002Fopenreview.net\u002Fpdf?id=NphowWHYJj)\n- [arxiv'25] [Libra：协同CUDA与Tensor Core实现高性能稀疏矩阵乘法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.22714)\n- [arxiv'25] [Prompt-to-Leaderboard：提示自适应LLM评估](https:\u002F\u002Fopenreview.net\u002Fforum?id=7VPRrzFEN8) [[代码](https:\u002F\u002Fgithub.com\u002Flmarena\u002Fp2l)]\n- [ISCA'25] [Meta第二代AI芯片：模型-芯片协同设计与生产化经验](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731409)\n- [ISCA'25] [破除CUDA神话，迈向基于GPU的AI系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3695053.3731050)\n- [ISCA'25] [UGPU：动态构建非平衡GPU以提升资源效率](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3695053.3731103)\n- [arxiv'25] [SeerAttention-R：面向长时推理的稀疏注意力适配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08889)\n- [arxiv'25] [强化预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08007)\n- [arxiv'25] [MemOS：大型语言模型中面向记忆增强生成（MAG）的操作系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.22101)\n- [NSDI'25] [通过阶段融合优化大型语言模型的RLHF训练](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fnsdi25\u002Fpresentation\u002Fzhong)\n- [arxiv'25] [短而正确地思考，而非冗长：高效准确地服务LLM推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13326)\n- [arxiv'25] [借助可训练的稀疏注意力加速视频扩散](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13389)\n- [arxiv'25] [SSR：测试时的推测性并行扩展推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15340)\n- [arxiv'25] [Hunyuan-TurboS：通过Mamba-Transformer协同与自适应思维链推进大型语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.15431)\n- [arxiv'25] [仅在需要时才思考：大型混合推理模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14631)\n- [MLSys'25] [优化关系型数据分析工作负载中的LLM查询](https:\u002F\u002Fopenreview.net\u002Fforum?id=R7bK9yycHp)\n- [arxiv'25] [让RL重拾价值：通过统一LLM推理者与验证者实现更好的测试时扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.04842)\n- [arxiv'25] [借助NonGEMM工作负载理解最新ML工作负载的性能边界](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.11788)\n- [arxiv'25] [处理会思考的奖励模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.16828)\n- [arxiv'25] [Seed-Thinking-v1.5：利用强化学习推进卓越推理模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13914)\n- [arxiv'25] [休眠期计算：超越测试时的推理扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.13171)\n- [arxiv'25] [SpecReason：通过推测性推理实现快速准确的推理时计算](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07891)\n- [arxiv'25] [原生多模态模型的缩放法则](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07951)\n- [arxiv'25] [OLMoTrace：追踪语言模型输出至数万亿训练标记](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.07096)\n- [arxiv'25] 
[NotebookOS：面向交互式训练的笔记本操作系统，配备按需GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.20591)\n- [arxiv'25] [Alchemist：迈向高效在线持续学习系统的设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.01066)\n- [arxiv'25] [线性注意力：用于高效双向序列建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.16249)\n- [arxiv'25] [S*：代码生成的测试时扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14382)\n- [arxiv'25] [优化复合AI系统中的模型选择](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.14815)\n- [arxiv'25] [Copilot Arena：野外代码LLM评估平台](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.09328)\n- [arxiv'25] [Efficient-vDiT：带有注意力瓦片的高效视频扩散Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.06155)\n- [arxiv'25] [BARE：结合基础与指令调优语言模型，以更好地生成合成数据](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01697)\n- [arxiv'25] [Sparse VideoGen：利用时空稀疏性加速视频扩散Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.01776)\n- [arxiv'25] [基于VectorQ的自适应语义提示缓存](https:\u002F\u002Farxiv.org\u002Fabs\u002F2502.03771)\n- [EuroSys'25] [HybridFlow：灵活高效的RLHF框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3689031.3696075)\n- [arxiv'25] [更深层次地测量GPU利用率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.16909)\n- [ASPLOS'25] [PipeLLM：通过推测性流水线加密提供快速且私密的大语言模型服务](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3669940.3707224)\n- [arxiv'24] [更小、更弱，却更好：通过计算最优采样训练LLM推理者](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.16737)\n- [arxiv'24] [破除CUDA神话，迈向基于GPU的AI系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.00210)\n- [arxiv'24] [XGrammar：面向大型语言模型的灵活高效结构化生成引擎](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.15100)\n- [CPAL'24 (PMLR)] [Jaxpruner：简洁的稀疏性研究库](https:\u002F\u002Fproceedings.mlr.press\u002Fv234\u002Flee24a.html)\n- [arxiv'24] [Scorch：稀疏深度学习库](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.16883)\n- [arxiv'24] [淹没在文档中：重排序器推理规模化的后果](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.11767)\n- [arxiv'24] [通过向LLM提问来构建语言神经科学的可解释嵌入](https:\u002F\u002Fopenreview.net\u002Fforum?id=mxMvWwyBWe)\n- [arxiv'24] [小规模大型语言模型训练的计算瓶颈](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.19456)\n- [Survey :mag:] [arxiv'24] [大型语言模型时代的小型语言模型综合调查：技术、增强、应用、与LLM的合作以及可信度](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.03350)\n- [NeurIPS'24] [是否只需增加LLM调用次数即可？迈向复合推理系统的缩放法则](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02419)\n- [arxiv'24] [随机猴子在作祟：廉价的随机增强破坏了LLM的安全对齐](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02785)\n- [arxiv'24] [DroidSpeak：增强跨LLM通信](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.02820v1)\n- [arxiv'24] [利用FlexEMR解聚嵌入推荐系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12794)\n- [arxiv'24] [JudgeBench：评估基于LLM的法官的基准测试](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12784)\n- [arxiv'24] [只需一步：通过Scale Distillation实现Stable Diffusion的快速超分辨率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.17258)\n- [arxiv'24] [大型生成式模型时代的计算：从云原生到AI原生](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.12230)\n- [ATC'24] [Centimani：通过新型性能预测器实现DNN训练中快速AI加速器的选择](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc24\u002Fpresentation\u002Fxie)\n- [arxiv'23] [使用SGLang高效编程大型语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.07104)\n- [MICRO'23] [超越模拟器的路径：针对DNN工作负载的快速准确GPU执行时间预测](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3613424.3614277)\n- [arxiv'23] [直接偏好优化：你的语言模型其实是一个奖励模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18290)\n- [arxiv'22] [通过人类反馈训练语言模型遵循指令](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.02155)\n\n# 参考资料\n本仓库的灵感来源于：\n- https:\u002F\u002Fgithub.com\u002FHuaizhengZhang\u002FAwesome-System-for-Machine-Learning\n- 
https:\u002F\u002Fgithub.com\u002FS-Lab-System-Group\u002FAwesome-DL-Scheduling-Papers\n- https:\u002F\u002Fgithub.com\u002Fganler\u002FResearchReading\n- https:\u002F\u002Fjeongseob.github.io\u002Freadings_mlsys.html\n- https:\u002F\u002Fgithub.com\u002Fchwan1016\u002Fawesome-gnn-systems\n- https:\u002F\u002Fgithub.com\u002FConnollyLeon\u002Fawesome-Auto-Parallelism","# ml-systems-papers 快速上手指南\n\n`ml-systems-papers` 并非一个可执行的软件工具或代码库，而是一个**机器学习系统（Machine Learning Systems）领域的学术论文清单**。它由社区维护，旨在帮助研究人员和工程师快速定位数据处理、训练系统、推理优化、大模型架构等方向的前沿论文。\n\n因此，本指南将指导你如何获取、浏览和利用这份宝贵的资源列表。\n\n## 环境准备\n\n本项目无需安装任何运行时依赖或特定操作系统，只需具备以下条件即可：\n\n*   **硬件要求**：无特殊要求，任意能运行浏览器的设备均可。\n*   **前置依赖**：\n    *   稳定的网络连接（用于访问 GitHub 和论文链接）。\n    *   Git（可选，仅当你需要克隆仓库到本地时）。\n    *   Markdown 阅读器或直接使用 Web 浏览器。\n\n## 获取与安装步骤\n\n你可以通过以下两种方式访问该论文列表：\n\n### 方式一：在线浏览（推荐）\n直接访问 GitHub 仓库页面，这是最快捷的方式，内容会实时同步最新提交。\n\n1.  打开浏览器访问项目主页：\n    ```text\n    https:\u002F\u002Fgithub.com\u002Fbyungsoo-oh\u002Fml-systems-papers\n    ```\n2.  在 `README.md` 中查看按主题分类的论文目录。\n\n### 方式二：本地克隆\n如果你希望离线阅读或通过 Pull Request 贡献新的论文，可以克隆仓库到本地。\n\n1.  打开终端，执行以下命令克隆仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fbyungsoo-oh\u002Fml-systems-papers.git\n    ```\n    *(国内用户若遇网络缓慢，可使用镜像源加速)*\n    ```bash\n    git clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002Fml-systems-papers.git\n    # 注意：若 Gitee 无对应镜像，建议使用 git clone --depth=1 减小下载体积\n    ```\n\n2.  进入项目目录：\n    ```bash\n    cd ml-systems-papers\n    ```\n\n3.  使用任意文本编辑器或 Markdown 预览工具打开 `README.md` 文件。\n\n## 基本使用\n\n该项目的使用核心在于**按需检索**和**追踪前沿**。\n\n### 1. 按主题查找论文\n项目已将论文划分为多个关键领域。打开 `README.md` 后，利用目录（Table of Contents）快速跳转至你感兴趣的部分。主要涵盖领域包括：\n\n*   **Data Processing**: 数据流水线优化、缓存策略、LLM 数据平面。\n*   **Training System**: GPU 集群调度、分布式训练、AutoML、GNN 训练系统。\n*   **Inference System & Optimization**: 推理系统、注意力机制优化、MoE (混合专家模型)。\n*   **LLM Specifics**: 长上下文处理、模型压缩、RAG (检索增强生成)。\n*   **Infrastructure**: 通信优化、容错处理、GPU 内存管理与共享。\n\n### 2. 识别论文类型\n在阅读列表时，注意以下标记：\n*   **[Survey 🔍]**: 表示该条目为综述论文，适合快速了解某个子领域的全貌。\n*   **[Year]**: 方括号内的年份（如 `[arxiv'25]`, `[OSDI'24]`）标明了论文的发表年份或会议，有助于筛选最新成果。\n\n### 3. 获取论文全文\n每个条目都包含了论文标题和链接。\n*   点击标题链接（通常指向 arXiv、会议官网或 PDF 文件）。\n*   例如，若想研究“数据加载优化”，可在 **Data pipeline optimization** 章节找到：\n    > [arxiv'25] [OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.09844)\n    \n    点击链接即可直达 arXiv 页面下载 PDF。\n\n### 4. 贡献新论文（可选）\n如果你发现了未收录的高质量论文，欢迎通过 GitHub Pull Request 贡献：\n1.  Fork 本仓库。\n2.  在对应的分类下添加论文条目，格式参考现有条目：\n    ```markdown\n    - [Conference'Year] [Paper Title](Link)\n    ```\n3.  
提交 PR 等待合并。\n\n---\n*提示：由于这是一个动态更新的列表，建议定期拉取最新代码或刷新网页以获取最新的学术成果。*","某大型电商公司的算法团队正在构建下一代超大规模推荐系统，面临海量数据预处理导致的 GPU 训练频繁空闲瓶颈。\n\n### 没有 ml-systems-papers 时\n- 团队在解决数据加载延迟时盲目尝试，缺乏对“数据流水线优化”领域前沿方案（如 Plumber 或 tf.data 机制）的系统性认知。\n- 难以区分哪些是学术界已验证的成熟架构，哪些是实验性想法，导致在错误的技术路线上浪费数周研发资源。\n- 对于多源异构数据下的缓存策略和分布式存储方案，只能依赖零散的博客文章，无法找到针对大规模深度推荐模型的专业论文支撑。\n- 错过了如\"Streaming Batch Model\"等能显著提升容错性和执行效率的最新成果，系统稳定性长期得不到根本改善。\n\n### 使用 ml-systems-papers 后\n- 工程师通过\"Data Processing\"分类快速定位到 SIGMOD 和 VLDB 上的关键论文，直接复用了经过验证的数据流水线诊断与去除瓶颈方法。\n- 借助清晰的目录结构和 [Survey 🔍] 标记，团队迅速掌握了该领域的技术全景，将选型调研时间从数周缩短至两天。\n- 参考列表中关于\"Caching and distributed storage\"的最新研究，设计了适配多租户场景的智能缓存层，彻底消除了 GPU 等待数据的“饥饿”现象。\n- 及时引入了 arxiv'25 最新发表的 OVERLORD 等前沿方案，实现了多源大模型训练数据加载的终极扩展，显著提升了集群吞吐量。\n\nml-systems-papers 将分散的学术智慧转化为可落地的工程指南，帮助团队用最短路径解决了制约算力效率的核心痛点。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbyungsoo-oh_ml-systems-papers_d999a757.png","byungsoo-oh","Byungsoo Oh","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbyungsoo-oh_280c6319.png","CS Ph.D. Student @ Cornell University","Cornell University","Ithaca, NY","byungsoo@cs.cornell.edu",null,"https:\u002F\u002Fbyungsoo-oh.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fbyungsoo-oh",542,36,"2026-04-16T00:24:23",1,"","未说明",{"notes":89,"python":87,"dependencies":90},"该仓库是一个机器学习系统领域的论文列表（Paper List），并非可执行的软件工具或代码库。它主要收录了关于数据处理、训练系统、推理系统、编译器优化等主题的学术论文链接。因此，该仓库本身没有操作系统、GPU、内存、Python 版本或依赖库的安装运行需求。用户只需具备浏览器即可访问其中的论文链接，或克隆仓库查看 Markdown 文档。",[],[14],[93,94,95],"awesome-list","awesome-papers","machine-learning-systems","2026-03-27T02:49:30.150509","2026-04-19T15:46:31.847224",[],[]]