[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-merrymercy--awesome-tensor-compilers":3,"tool-merrymercy--awesome-tensor-compilers":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":83,"forks":84,"last_commit_at":85,"license":82,"difficulty_score":86,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":93,"view_count":32,"oss_zip_url":82,"oss_zip_packed_at":82,"status":17,"created_at":101,"updated_at":102,"faqs":103,"releases":104},7174,"merrymercy\u002Fawesome-tensor-compilers","awesome-tensor-compilers","A list of awesome compiler projects and papers for tensor computation and deep learning.","awesome-tensor-compilers 是一个精心整理的开源资源清单，专注于张量计算与深度学习领域的编译器项目及相关学术论文。在深度学习模型日益复杂、硬件架构多样化的背景下，如何将算法高效地部署到不同芯片上成为一大难题。这份清单正是为了解决这一痛点而生，它系统性地汇集了从底层代码生成、自动调优到图级优化等关键环节的优秀成果。\n\n资源库中不仅收录了 TVM、MLIR、XLA、Triton 等业界主流的开源编译器框架，还分类整理了涵盖编译器设计、成本模型、稀疏计算及量化等前沿方向的学术文献。无论是希望深入理解编译原理的研究人员，还是需要将模型高效部署到 CPU、GPU 或 NPU 的开发者，都能从中快速找到合适的工具或理论参考。\n\n其独特价值在于“全”与“新”：既提供了端到端的工程实践方案，又追踪了如动态形状编译、超优化器等最新技术趋势。对于想要进入深度学习编译器领域的新手，它也是一份极佳的入门指南，帮助使用者少走弯路，直接触达社区核心资源。","# Awesome Tensor Compilers\n![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)\n[![Maintenance](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fgraphs\u002Fcommit-activity)\n\nA list of awesome compiler projects and papers for tensor computation and deep learning. 
\n\n## Contents\n- [Open Source Projects](#open-source-projects)\n- [Papers](#papers)\n  - [Survey](#survey)\n  - [Compiler and IR Design](#compiler-and-ir-design)\n  - [Auto-tuning and Auto-scheduling](#auto-tuning-and-auto-scheduling)\n  - [Cost Model](#cost-model)\n  - [CPU & GPU Optimization](#cpu-and-gpu-optimization)\n  - [NPU Optimization](#npu-optimization)\n  - [Graph-level Optimization](#graph-level-optimization)\n  - [Dynamic Model](#dynamic-model)\n  - [Graph Neural Networks](#graph-neural-networks)\n  - [Distributed Computing](#distributed-computing)\n  - [Quantization](#quantization)\n  - [Sparse](#sparse)\n  - [Program Rewriting](#program-rewriting)\n  - [Verification and Testing](#verification-and-testing)\n- [Tutorials](#tutorials)\n- [Contribute](#contribute)\n\n## Open Source Projects\n- [TVM: An End to End Machine Learning Compiler Framework](https:\u002F\u002Ftvm.apache.org\u002F)\n- [MLIR: Multi-Level Intermediate Representation](https:\u002F\u002Fmlir.llvm.org\u002F)\n- [XLA: Optimizing Compiler for Machine Learning](https:\u002F\u002Fwww.tensorflow.org\u002Fxla)\n- [Halide: A Language for Fast, Portable Computation on Images and Tensors](https:\u002F\u002Fhalide-lang.org\u002F)\n- [Glow: Compiler for Neural Network Hardware Accelerators](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fglow)\n- [nnfusion: A Flexible and Efficient Deep Neural Network Compiler](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fnnfusion)\n- [Hummingbird: Compiling Trained ML Models into Tensor Computation](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhummingbird)\n- [Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations](https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftriton)\n- [AITemplate: A Python framework which renders neural network into high performance CUDA\u002FHIP C++ code](https:\u002F\u002Fgithub.com\u002Ffacebookincubator\u002FAITemplate)\n- [Hidet: A Compilation-based Deep Learning Framework](https:\u002F\u002Fgithub.com\u002Fhidet-org\u002Fhidet)\n- [Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code](http:\u002F\u002Ftiramisu-compiler.org\u002F)\n- [TensorComprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions](https:\u002F\u002Ffacebookresearch.github.io\u002FTensorComprehensions\u002F)\n- [PlaidML: A Platform for Making Deep Learning Work Everywhere](https:\u002F\u002Fgithub.com\u002Fplaidml\u002Fplaidml)\n- [BladeDISC: An End-to-End DynamIc Shape Compiler for Machine Learning Workloads](https:\u002F\u002Fgithub.com\u002Falibaba\u002FBladeDISC)\n- [TACO: The Tensor Algebra Compiler](http:\u002F\u002Ftensor-compiler.org\u002F)\n- [Nebulgym: Easy-to-use Library to Accelerate AI Training](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebulgym)\n- [Speedster: Automatically apply SOTA optimization techniques to achieve the maximum inference speed-up on your hardware](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebullvm\u002Ftree\u002Fmain\u002Fapps\u002Faccelerate\u002Fspeedster)\n- [NN-512: A Compiler That Generates C99 Code for Neural Net Inference](https:\u002F\u002Fnn-512.com\u002F)\n- [DaCeML: A Data-Centric Compiler for Machine Learning](https:\u002F\u002Fgithub.com\u002Fspcl\u002Fdaceml)\n- [Mirage: A Multi-level Superoptimizer for Tensor Algebra](https:\u002F\u002Fgithub.com\u002Fmirage-project\u002Fmirage)\n\n## Papers\n\n### Survey\n- [The Deep Learning Compiler: A Comprehensive Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03794) by Mingzhen Li et al., TPDS 2020\n- [An 
In-depth Comparison of Compilers for Deep Neural Networks on Hardware](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8782480) by Yu Xing et al., ICESS 2019\n\n### Compiler and IR Design\n- [(De\u002FRe)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3665643) by Ari Rasch, TOPLAS 2024\n- [BladeDISC: Optimizing Dynamic Shape Machine Learning Workloads via Compiler Approach](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3617327) by Zhen Zheng et al., SIGMOD 2024\n- [Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09603) by Yaoyao Ding et al., ASPLOS 2023\n- [TensorIR: An Abstraction for Automatic Tensorized Program Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.04296) by Siyuan Feng, Bohan Hou et al., ASPLOS 2023\n- [Exocompilation for Productive Programming of Hardware Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3519939.3523446) by Yuka Ikarashi, Gilbert Louis Bernstein et al., PLDI 2022\n- [DaCeML: A Data-Centric Compiler for Machine Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10802) by Oliver Rausch et al., ICS 2022\n- [FreeTensor: A Free-Form DSL with Holistic Optimizations for Irregular Tensor Programs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3519939.3523448) by Shizhi Tang et al., PLDI 2022\n- [Roller: Fast and Efficient Tensor Compilation for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Fzhu) by Hongyu Zhu et al., OSDI 2022\n- [AStitch: Enabling a New Multi-dimensional Optimization Space for Memory-Intensive ML Training and Inference on Modern SIMT Architectures](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3503222.3507723) by Zhen Zheng et al., ASPLOS 2022\n- [Composable and Modular Code Generation in MLIR: A Structured and Retargetable Approach to Tensor Compiler Construction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.03293.pdf) by Nicolas Vasilache et al., arXiv 2022\n- [PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi21\u002Fpresentation\u002Fwang) by Haojie Wang et al., OSDI 2021\n- [MLIR: Scaling Compiler Infrastructure for Domain Specific Computation](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9370308) by Chris Lattner et al., CGO 2021\n- [A Tensor Compiler for Unified Machine Learning Prediction Serving](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fnakandala) by Supun Nakandala et al., OSDI 2020\n- [Rammer: Enabling Holistic Deep Learning Compiler Optimizations with rTasks](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi20\u002Fpresentation\u002Fma) by Lingxiao Ma et al., OSDI 2020\n- [Stateful Dataflow Multigraphs: A Data-Centric Model for Performance Portability on Heterogeneous Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.10345) by Tal Ben-Nun et al., SC 2019\n- [TASO: The Tensor Algebra SuperOptimizer for Deep Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3341301.3359630) by Zhihao Jia et al., SOSP 2019\n- [Tiramisu: A polyhedral compiler for expressing fast and portable code](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.10694) by Riyadh Baghdadi et al., CGO 2019\n- [Triton: an intermediate language and compiler for tiled neural 
network computations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3315508.3329973) by Philippe Tillet et al., MAPL 2019\n- [Relay: A High-Level Compiler for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.08368) by Jared Roesch et al., arXiv 2019\n- [TVM: An Automated End-to-End Optimizing Compiler for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi18\u002Fpresentation\u002Fchen) by Tianqi Chen et al., OSDI 2018\n- [Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04730) by Nicolas Vasilache et al., arXiv 2018\n- [Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.08058) by Scott Cyphers et al., arXiv 2018\n- [Glow: Graph Lowering Compiler Techniques for Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.00907) by Nadav Rotem et al., arXiv 2018\n- [DLVM: A modern compiler infrastructure for deep learning systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.03016) by Richard Wei et al., arXiv 2018\n- [Diesel: DSL for linear algebra and neural net computations on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3211346.3211354) by Venmugil Elango et al., MAPL 2018\n- [The Tensor Algebra Compiler](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3133901) by Fredrik Kjolstad et al., OOPSLA 2017\n- [Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F2491956.2462176) by Jonathan Ragan-Kelley et al., PLDI 2013\n\n\n### Auto-tuning and Auto-scheduling\n- [Accelerated Auto-Tuning of GPU Kernels for Tensor Computations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3650200.3656626) by Chendi Li, Yufan Xu et al., ICS 2024\n- [Enabling Tensor Language Model to Assist in Generating High-Performance Tensor Programs for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhai) by Yi Zhai et al., OSDI 2024\n- [The Droplet Search Algorithm for Kernel Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3650109) by Michael Canesche et al., ACM TACO 2024\n- [Tensor Program Optimization with Probabilistic Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13603) by Junru Shao et al., NeurIPS 2022\n- [One-shot tuner for deep learning compilers](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3497776.3517774) by Jaehun Ryu et al., CC 2022 \n- [Autoscheduling for sparse tensor algebra with an asymptotic cost model](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3519939.3523442) by Peter Ahrens et al., PLDI 2022\n- [Bolt: Bridging the Gap between Auto-tuners and Hardware-native Performance](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002F38b3eff8baf56627478ec76a704e9b52-Abstract.html) by Jiarong Xing et al., MLSys 2022\n- [A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3503222.3507767) by Dan Zhang et al., ASPLOS 2022\n- [Efficient Automatic Scheduling of Imaging and Vision Pipelines for the GPU](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3485486) by Luke Anderson et al., OOPSLA 2021\n- [Lorien: Efficient Deep Learning Workloads 
Delivery](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3472883.3486973) by Cody Hao Yu et al., SoCC 2021\n- [Value Learning for Throughput Optimization of Deep Neural Networks](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Fhash\u002F73278a4a86960eeb576a8fd4c9ec6997-Abstract.html) by Benoit Steiner et al., MLSys 2021\n- [A Flexible Approach to Autotuning Multi-Pass Machine Learning Compilers](https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fxla-autotuning-pact2021.pdf) by Phitchaya Mangpo Phothilimthana et al., PACT 2021\n- [Ansor: Generating High-Performance Tensor Programs for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06762) by Lianmin Zheng et al., OSDI 2020\n- [Schedule Synthesis for Halide Pipelines on GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3406117) by Sioutas Savvas et al., TACO 2020\n- [FlexTensor: An Automatic Schedule Exploration and Optimization Framework for Tensor Computation on Heterogeneous System](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3373376.3378508) by Size Zheng et al., ASPLOS 2020\n- [ProTuner: Tuning Programs with Monte Carlo Tree Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.13685) by Ameer Haj-Ali et al., arXiv 2020\n- [AdaTune: Adaptive tensor program compilation made efficient](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fuploads\u002Fprod\u002F2020\u002F10\u002Fnips20adatune.pdf) by Menghao Li et al., NeurIPS 2020\n- [Optimizing the Memory Hierarchy by Compositing Automatic Transformations on Computations and Data](https:\u002F\u002Fwww.microarch.org\u002Fmicro53\u002Fpapers\u002F738300a427.pdf) by Jie Zhao et al., MICRO 2020\n- [Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation](https:\u002F\u002Fopenreview.net\u002Fforum?id=rygG4AVFvH) by Byung Hoon Ahn et al., ICLR 2020\n- [A Sparse Iteration Space Transformation Framework for Sparse Tensor Algebra](http:\u002F\u002Ftensor-compiler.org\u002Fsenanayake-oopsla20-taco-scheduling.pdf) by Ryan Senanayake et al. 
OOPSLA 2020\n- [Learning to Optimize Halide with Tree Search and Random Programs](https:\u002F\u002Fhalide-lang.org\u002Fpapers\u002Fautoscheduler2019.html) by Andrew Adams et al., SIGGRAPH 2019\n- [Learning to Optimize Tensor Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.08166) by Tianqi Chen et al., NeurIPS 2018\n- [Automatically Scheduling Halide Image Processing Pipelines](http:\u002F\u002Fgraphics.cs.cmu.edu\u002Fprojects\u002Fhalidesched\u002F) by Ravi Teja Mullapudi et al., SIGGRAPH 2016\n\n### Cost Model\n- [TLP: A Deep Learning-based Cost Model for Tensor Program Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.03578) by Yi Zhai et al., ASPLOS 2023\n- [An Asymptotic Cost Model for Autoscheduling Sparse Tensor Programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14947) by Peter Ahrens et al., PLDI 2022\n- [TenSet: A Large-scale Program Performance Dataset for Learned Tensor Compilers](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fa684eceee76fc522773286a895bc8436-Abstract-round1.html) by Lianmin Zheng et al., NeurIPS 2021\n- [A Deep Learning Based Cost Model for Automatic Code Optimization](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Fhash\u002F3def184ad8f4755ff269862ea77393dd-Abstract.html) by Riyadh Baghdadi et al., MLSys 2021\n- [A Learned Performance Model for the Tensor Processing Unit](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.01040) by Samuel J. Kaufman et al., MLSys 2021\n- [DYNATUNE: Dynamic Tensor Program Optimization in Deep Neural Network Compilation](https:\u002F\u002Fopenreview.net\u002Fforum?id=GTGb3M_KcUl) by Minjia Zhang et al., ICLR 2021\n- [MetaTune: Meta-Learning Based Cost Model for Fast and Efficient Auto-tuning Frameworks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.04199) by Jaehun Ryu et al., arXiv 2021\n- [Expedited Tensor Program Compilation Based on LightGBM](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1742-6596\u002F2078\u002F1\u002F012019) by Gonghan Liu et al., JPCS 2021\n\n### CPU and GPU Optimization\n- [DeepCuts: A deep learning optimization framework for versatile GPU workloads](https:\u002F\u002Fpldi21.sigplan.org\u002Fdetails\u002Fpldi-2021-papers\u002F13\u002FDeepCuts-A-Deep-Learning-Optimization-Framework-for-Versatile-GPU-Workloads) by Wookeun Jung et al., PLDI 2021\n- [Analytical characterization and design space exploration for optimization of CNNs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3445814.3446759) by Rui Li et al., ASPLOS 2021\n- [UNIT: Unifying Tensorized Instruction Compilation](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9370330) by Jian Weng et al., CGO 2021\n- [PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02230) by Sanket Tavarageri et al., arXiv 2020\n- [Fireiron: A Data-Movement-Aware Scheduling Language for GPUs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3410463.3414632) by Bastian Hagedorn et al., PACT 2020\n- [Automatic Kernel Generation for Volta Tensor Cores](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12645) by Somashekaracharya G. 
Bhaskaracharya et al., arXiv 2020\n- [Swizzle Inventor: Data Movement Synthesis for GPU Kernels](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3297858.3304059) by Phitchaya Mangpo Phothilimthana et al., ASPLOS 2019\n- [Optimizing CNN Model Inference on CPUs](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc19\u002Fpresentation\u002Fliu-yizhi) by Yizhi Liu et al., ATC 2019\n- [Analytical cache modeling and tilesize optimization for tensor contractions](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3295500.3356218) by Rui Li et al., SC 2019\n\n### NPU Optimization\n- [Heron: Automatically Constrained High-Performance Library Generation for Deep Learning Accelerators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3582016.3582061) by Jun Bi et al., ASPLOS 2023\n- [AMOS: Enabling Automatic Mapping for Tensor Computations On Spatial Accelerators with Hardware Abstraction](https:\u002F\u002Fcs.stanford.edu\u002F~anjiang\u002Fpapers\u002FZhengETAL22AMOS.pdf) by Size Zheng et al., ISCA 2022\n- [Towards the Co-design of Neural Networks and Accelerators](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002F31fefc0e570cb3860f2a6d4b38c6490d-Abstract.html) by Yanqi Zhou et al., MLSys 2022\n- [AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations](https:\u002F\u002Fwww.di.ens.fr\u002F~zhaojie\u002Fpldi2021-paper) by Jie Zhao et al., PLDI 2021\n\n### Graph-level Optimization\n- [POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07697) by Shishir G. Patil et al., ICML 2022\n- [Collage: Seamless Integration of Deep Learning Backends with Automatic Placement](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00655) by Byungsoo Jeon et al., PACT 2022\n- [Apollo: Automatic Partition-based Operator Fusion through Layer by Layer Optimization](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002F069059b7ef840f0c74a814ec9237b6ec-Abstract.html) by Jie Zhao et al., MLSys 2022\n- [Equality Saturation for Tensor Graph Superoptimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.01332) by Yichen Yang et al., MLSys 2021\n- [IOS: An Inter-Operator Scheduler for CNN Acceleration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01302) by Yaoyao Ding et al., MLSys 2021\n- [Optimizing DNN Computation Graph using Graph Substitutions](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.14778\u002F3407790.3407857) by Jingzhi Fang et al., VLDB 2020\n- [Transferable Graph Optimizers for ML Compilers](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Fhash\u002F9f29450d2eb58feb555078bdefe28aa5-Abstract.html) by Yanqi Zhou et al., NeurIPS 2020\n- [FusionStitching: Boosting Memory Intensive Computations for Deep Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.10924) by Zhen Zheng et al., arXiv 2020\n- [Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2020\u002Fhash\u002F5f0ad4db43d8723d18169b2e4817a160-Abstract.html) by Woosuk Kwon et al., NeurIPS 2020\n\n### Dynamic Model\n- [Axon: A Language for Dynamic Shapes in Deep Learning Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02374) by Alexander Collins et al., arXiv 2022\n- [DietCode: Automatic Optimization for Dynamic Tensor Programs](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002Ffa7cdfad1a5aaf8370ebeda47a1ff1c3-Abstract.html) 
by Bojian Zheng et al., MLSys 2022\n- [The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10221) by Pratik Fegade et al., MLSys 2022\n- [Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.03031) by Haichen Shen et al., MLSys 2021\n- [DISC: A Dynamic Shape Compiler for Machine Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.05288) by Kai Zhu et al., EuroMLSys 2021\n- [Cortex: A Compiler for Recursive Deep Learning Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.01383) by Pratik Fegade et al., MLSys 2021\n\n### Graph Neural Networks\n- [Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002Fa87ff679a2f3e71d9181a67b7542122c-Abstract.html) by Zhiqiang Xie et al., MLSys 2022\n- [Seastar: vertex-centric programming for graph neural networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3447786.3456247) by Yidi Wu et al., Eurosys 2021 \n- [FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.11359) by Yuwei Hu et al., SC 2020\n\n### Distributed Computing\n- [SpDISTAL: Compiling Distributed Sparse Tensor Computations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.13901) by Rohan Yadav et al., SC 2022\n- [Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12023) by Lianmin Zheng, Zhuohan Li, Hao Zhang et al., OSDI 2022\n- [Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Funger) by Colin Unger, Zhihao Jia, et al., OSDI 2022\n- [Synthesizing Optimal Parallelism Placement and Reduction Strategies on Hierarchical Systems for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10548) by Ningning Xie, Tamara Norman, Dominik Grewe, Dimitrios Vytiniotis et al., MLSys 2022\n- [DISTAL: The Distributed Tensor Algebra Compiler](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08069) by Rohan Yadav et al., PLDI 2022\n- [GSPMD: General and Scalable Parallelization for ML Computation Graphs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04663) by Yuanzhong Xu et al., arXiv 2021\n- [Breaking the Computation and Communication Abstraction Barrier in Distributed Machine Learning Workloads](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.05720) by Abhinav Jangda et al., ASPLOS 2022\n- [OneFlow: Redesign the Distributed Deep Learning Framework from Scratch](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.15032) by Jinhui Yuan et al., arXiv 2021\n- [Beyond Data and Model Parallelism for Deep Neural Networks](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2019\u002Fhash\u002Fc74d97b01eae257e44aa9d5bade97baf-Abstract.html) by Zhihao Jia et al., MLSys 2019\n- [Supporting Very Large Models using Automatic Dataflow Graph Partitioning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3302424.3303953) by Minjie Wang et al., EuroSys 2019\n- [Distributed Halide](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3016078.2851157) by Tyler Denniston et al., PPoPP 2016\n\n### Quantization\n- [Automated Backend-Aware Post-Training Quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.14949) by Ziheng Jiang et al., 
arXiv 2021\n- [Efficient Execution of Quantized Deep Learning Models: A Compiler Approach](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10226) by Animesh Jain et al., arXiv 2020\n- [Automatic Generation of High-Performance Quantized Machine Learning Kernels](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3368826.3377912) by Meghan Cowan et al., CGO 2020\n\n### Sparse\n- [The Sparse Abstract Machine](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14610) by Olivia Hsu et al., ASPLOS 2023\n- [SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.04606) by Zihao Ye et al., ASPLOS 2023\n- [WACO: Learning Workload-Aware Co-optimization of the Format and Schedule of a Sparse Tensor Program](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3575693.3575742) by Jaeyeon Won et al., ASPLOS 2023\n- [Looplets: A Language For Structured Coiteration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.05250) by Willow Ahrens et al., CGO 2023\n- [Code Synthesis for Sparse Tensor Format Conversion and Optimization](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F367180198_Code_Synthesis_for_Sparse_Tensor_Format_Conversion_and_Optimization) by Tobi Popoola et al., CGO 2023\n- [Stardust: Compiling Sparse Tensor Algebra to a Reconfigurable Dataflow Architecture](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.03251) by Olivia Hsu et al., arXiv 2022\n- [Unified Compilation for Lossless Compression and Sparse Computing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1109\u002FCGO53902.2022.9741282) by Daniel Donenfeld et al., CGO 2022\n- [SparseLNR: Accelerating Sparse Tensor Computations Using Loop Nest Restructuring](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3524059.3532386) by Adhitha Dias et al., ICS 2022\n- [SparTA: Deep-Learning Model Sparsity via Tensor-with-Sparsity-Attribute](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Fzheng-ningxin) by Ningxin Zheng et al., OSDI 2022\n- [Compiler Support for Sparse Tensor Computations in MLIR](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04305) by Aart J.C. 
Bik et al., TACO 2022\n- [Compilation of Sparse Array Programming Models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3485505) by Rawn Henry and Olivia Hsu et al., OOPSLA 2021\n- [A High Performance Sparse Tensor Algebra Compiler in MLIR](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9651314) by Ruiqin Tian et al., LLVM-HPC 2021\n- [Dynamic Sparse Tensor Algebra Compilation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.01394) by Stephen Chou et al., arXiv 2021\n- [Automatic Generation of Efficient Sparse Tensor Format Conversion Routines](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3385412.3385963) by Stephen Chou et al., PLDI 2020\n- [TIRAMISU: A Polyhedral Compiler for Dense and Sparse Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04091) by Riyadh Baghdadi et al., arXiv 2020\n- [Tensor Algebra Compilation with Workspaces](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3314872.3314894) by Fredrik Kjolstad et al., CGO 2019\n- [Sparse Computation Data Dependence Simplification for Efficient Compiler-Generated Inspectors](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3314221.3314646) by Mahdi Soltan Mohammadi et al., PLDI 2019\n- [Taichi: A Language for High-Performance Computation on Spatially Sparse Data Structures](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3355089.3356506) by Yuanming Hu et al., ACM ToG 2019\n- [The Sparse Polyhedral Framework: Composing Compiler-Generated Inspector-Executor Code](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8436444) by Michelle Mills Strout et al., Proceedings of the IEEE 2018\n- [Format Abstraction for Sparse Tensor Algebra Compilers](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3276493) by Stephen Chou et al., OOPSLA 2018\n- [ParSy: Inspection and Transformation of Sparse Matrix Computations for Parallelism](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8665791) by Kazem Cheshmi et al., SC 2018\n- [Sympiler: Transforming Sparse Matrix Codes by Decoupling Symbolic Analysis](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3126908.3126936) by Kazem Cheshmi et al., SC 2017\n- [The Tensor Algebra Compiler](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3133901) by Fredrik Kjolstad et al., OOPSLA 2017\n- [Next-generation Generic Programming and its Application to Sparse Matrix Computations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F335231.335240) by Nikolay Mateev et al., ICS 2000\n- [A Framework for Sparse Matrix Code Synthesis from High-level Specifications](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F1592771) by Nawaaz Ahmed et al., SC 2000\n- [Automatic Nonzero Structure Analysis](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002F10.1137\u002FS009753979529595X) by Aart Bik et al., SIAM Journal on Computing 1999\n- [SIPR: A New Framework for Generating Efficient Code for Sparse Matrix Computations](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F3-540-48319-5_14) by William Pugh et al., LCPC 1998\n- [Automatic Data Structure Selection and Transformation for Sparse Matrix Computations](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F485501) by Aart Bik et al., TPDS 1996\n- [Compilation Techniques for Sparse Matrix Computations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F165939.166023) by Aart Bik et al., ICS 1993\n\n\n### Program Rewriting\n- [Verified tensor-program optimization via 
high-level scheduling rewrites](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3498717) by Amanda Liu et al., POPL 2022\n- [Pure Tensor Program Rewriting via Access Patterns (Representation Pearl)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09377) by Gus Smith et al., MAPL 2021\n- [Equality Saturation for Tensor Graph Superoptimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.01332) by Yichen Yang et al., MLSys 2021\n\n### Verification and Testing\n- [NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3575693.3575707) by Jiawei Liu et al., ASPLOS 2023\n- [Coverage-guided tensor compiler fuzzing with joint IR-pass mutation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3527317) by Jiawei Liu et al., OOPSLA 2022\n- [End-to-End Translation Validation for the Halide Language](https:\u002F\u002Fstorage.googleapis.com\u002Fpub-tools-public-publication-data\u002Fpdf\u002F2d03e3ae1106d3a2c950fcdc5eeb2c383eb24372.pdf) by Basile Clément et al., OOPSLA 2022\n- [A comprehensive study of deep learning compiler bugs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3468264.3468591) by Qingchao Shen et al., ESEC\u002FFSE 2021\n- [Verifying and Improving Halide’s Term Rewriting System with Program Synthesis](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3428234) by Julie L. Newcomb et al., OOPSLA 2020\n\n## Tutorials\n- [Machine Learning Compilation](https:\u002F\u002Fmlc.ai\u002Fsummer22\u002F)\n- [Dive into Deep Learning Compiler](https:\u002F\u002Ftvm.d2l.ai\u002F)\n\n## Contribute\nWe encourage all contributions to this repository. Open an [issue](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fissues) or send a [pull request](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fpulls).\n\n### Notes on the Link Format\nWe prefer using a link which points to a more informative page instead of a single pdf. For example, for arxiv papers, we prefer https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04799 over https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.04799.pdf. For USENIX papers (OSDI\u002FATC), we prefer https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi18\u002Fpresentation\u002Fchen over https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi18-chen.pdf. 
For ACM papers (ASPLOS\u002FPLDI\u002FEurosys), we prefer https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3519939.3523446 over https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3519939.3523446.\n","# 优秀的张量编译器\n![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)\n[![维护中](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fgraphs\u002Fcommit-activity)\n\n这是一份关于张量计算和深度学习领域优秀编译器项目及论文的列表。\n\n## 目录\n- [开源项目](#open-source-projects)\n- [论文](#papers)\n  - [综述](#survey)\n  - [编译器与中间表示设计](#compiler-and-ir-design)\n  - [自动调优与自动调度](#auto-tuning-and-auto-scheduling)\n  - [代价模型](#cost-model)\n  - [CPU与GPU优化](#cpu-and-gpu-optimization)\n  - [NPU优化](#npu-optimization)\n  - [图级优化](#graph-level-optimization)\n  - [动态模型](#dynamic-model)\n  - [图神经网络](#graph-neural-networks)\n  - [分布式计算](#distributed-computing)\n  - [量化](#quantization)\n  - [稀疏性](#sparse)\n  - [程序重写](#program-rewriting)\n  - [验证与测试](#verification-and-testing)\n- [教程](#tutorials)\n- [贡献](#contribute)\n\n## 开源项目\n- [TVM: 端到端机器学习编译框架](https:\u002F\u002Ftvm.apache.org\u002F)\n- [MLIR: 多级中间表示](https:\u002F\u002Fmlir.llvm.org\u002F)\n- [XLA: 面向机器学习的优化编译器](https:\u002F\u002Fwww.tensorflow.org\u002Fxla)\n- [Halide: 用于图像和张量快速、可移植计算的语言](https:\u002F\u002Fhalide-lang.org\u002F)\n- [Glow: 面向神经网络硬件加速器的编译器](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fglow)\n- [nnfusion: 灵活高效的深度神经网络编译器](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fnnfusion)\n- [Hummingbird: 将训练好的机器学习模型编译为张量计算](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhummingbird)\n- [Triton: 用于分块神经网络计算的中间语言及编译器](https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftriton)\n- [AITemplate: 将神经网络渲染为高性能CUDA\u002FHIP C++代码的Python框架](https:\u002F\u002Fgithub.com\u002Ffacebookincubator\u002FAITemplate)\n- [Hidet: 基于编译的深度学习框架](https:\u002F\u002Fgithub.com\u002Fhidet-org\u002Fhidet)\n- [Tiramisu: 用于表达快速且可移植代码的多面体编译器](http:\u002F\u002Ftiramisu-compiler.org\u002F)\n- [TensorComprehensions: 框架无关的高性能机器学习抽象](https:\u002F\u002Ffacebookresearch.github.io\u002FTensorComprehensions\u002F)\n- [PlaidML: 让深度学习在任何地方都能运行的平台](https:\u002F\u002Fgithub.com\u002Fplaidml\u002Fplaidml)\n- [BladeDISC: 面向机器学习工作负载的端到端动态形状编译器](https:\u002F\u002Fgithub.com\u002Falibaba\u002FBladeDISC)\n- [TACO: 张量代数编译器](http:\u002F\u002Ftensor-compiler.org\u002F)\n- [Nebulgym: 易用的AI训练加速库](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebulgym)\n- [Speedster: 自动应用最先进优化技术，以在您的硬件上实现最大推理加速](https:\u002F\u002Fgithub.com\u002Fnebuly-ai\u002Fnebullvm\u002Ftree\u002Fmain\u002Fapps\u002Faccelerate\u002Fspeedster)\n- [NN-512: 生成用于神经网络推理的C99代码的编译器](https:\u002F\u002Fnn-512.com\u002F)\n- [DaCeML: 面向机器学习的数据中心编译器](https:\u002F\u002Fgithub.com\u002Fspcl\u002Fdaceml)\n- [Mirage: 面向张量代数的多级超优化器](https:\u002F\u002Fgithub.com\u002Fmirage-project\u002Fmirage)\n\n## 论文\n\n### 综述\n- [深度学习编译器：全面综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03794) 由Mingzhen Li等人撰写，发表于TPDS 2020年\n- [硬件上深度神经网络编译器的深入比较](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8782480) 由Yu Xing等人撰写，发表于ICESS 2019年\n\n### 编译器与中间表示设计\n- [(De\u002FRe)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms]（通过多维同态进行数据并行计算的分解与重构）作者：Ari Rasch，TOPLAS 2024\n- [BladeDISC：基于编译器方法优化动态形状机器学习工作负载] 作者：郑振等，SIGMOD 2024\n- [Hidet：面向深度学习张量程序的任务映射编程范式] 作者：丁瑶瑶等，ASPLOS 2023\n- [TensorIR：用于自动张量化程序优化的抽象] 作者：冯思源、侯博涵等，ASPLOS 2023\n- [针对硬件加速器高效编程的外编译] 作者：Yuka 
Ikarashi、Gilbert Louis Bernstein等，PLDI 2022\n- [DaCeML：面向机器学习的数据中心编译器] 作者：Oliver Rausch等，ICS 2022\n- [FreeTensor：具有整体优化能力的自由形式 DSL，用于处理不规则张量程序] 作者：唐士志等，PLDI 2022\n- [Roller：面向深度学习的快速高效张量编译] 作者：朱洪宇等，OSDI 2022\n- [AStitch：在现代 SIMT 架构上为内存密集型 ML 训练与推理开启新的多维优化空间] 作者：郑振等，ASPLOS 2022\n- [MLIR 中的可组合、模块化代码生成：一种结构化且可重定向的张量编译器构建方法] 作者：Nicolas Vasilache等，arXiv 2022\n- [PET：利用部分等价变换与自动化修正优化张量程序] 作者：王浩杰等，OSDI 2021\n- [MLIR：面向领域特定计算的规模化编译基础设施] 作者：Chris Lattner等，CGO 2021\n- [用于统一机器学习预测服务的张量编译器] 作者：Supun Nakandala等，OSDI 2020\n- [Rammer：借助 rTasks 实现深度学习编译器的整体优化] 作者：Lingxiao Ma 等，OSDI 2020\n- [有状态数据流多图：面向异构架构性能可移植性的数据中心模型] 作者：Tal Ben-Nun等，SC 2019\n- [TASO：面向深度学习的张量代数超优化器] 作者：贾志豪等，SOSP 2019\n- [Tiramisu：用于表达快速且可移植代码的多面体编译器] 作者：Riyadh Baghdadi等，CGO 2019\n- [Triton：面向分块神经网络计算的中间语言与编译器] 作者：Philippe Tillet等，MAPL 2019\n- [Relay：面向深度学习的高级编译器] 作者：Jared Roesch等，arXiv 2019\n- [TVM：面向深度学习的自动化端到端优化编译器] 作者：陈天奇等，OSDI 2018\n- [Tensor Comprehensions：框架无关的高性能机器学习抽象] 作者：Nicolas Vasilache等，arXiv 2018\n- [Intel nGraph：面向深度学习的中间表示、编译器与执行器] 作者：Scott Cyphers等，arXiv 2018\n- [Glow：面向神经网络的图降级编译技术] 作者：Nadav Rotem等，arXiv 2018\n- [DLVM：面向深度学习系统的现代编译基础设施] 作者：Richard Wei等，arXiv 2018\n- [Diesel：用于 GPU 上线性代数及神经网络计算的 DSL] 作者：Venmugil Elango等，MAPL 2018\n- [张量代数编译器] 作者：Fredrik Kjolstad等，OOPSLA 2017\n- [Halide：用于优化图像处理流水线中并行性、局部性和重新计算的语言与编译器] 作者：Jonathan Ragan-Kelley等，PLDI 2013\n\n### 自动调优与自动调度\n- [面向张量计算的 GPU 内核加速自动调优](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3650200.3656626)，作者：Chendi Li、Yufan Xu 等，ICS 2024\n- [利用张量语言模型辅助生成深度学习高性能张量程序](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi24\u002Fpresentation\u002Fzhai)，作者：Yi Zhai 等，OSDI 2024\n- [用于内核调度的液滴搜索算法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3650109)，作者：Michael Canesche 等，ACM TACO 2024\n- [基于概率编程的张量程序优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13603)，作者：Junru Shao 等，NeurIPS 2022\n- [面向深度学习编译器的一次性调优器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3497776.3517774)，作者：Jaehun Ryu 等，CC 2022\n- [基于渐近成本模型的稀疏张量代数自动调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3519939.3523442)，作者：Peter Ahrens 等，PLDI 2022\n- [Bolt：弥合自动调优器与硬件原生性能之间的差距](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2022\u002Fhash\u002F38b3eff8baf56627478ec76a704e9b52-Abstract.html)，作者：Jiarong Xing 等，MLSys 2022\n- [面向领域优化深度学习加速器的全栈搜索技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3503222.3507767)，作者：Dan Zhang 等，ASPLOS 2022\n- [面向 GPU 的成像与视觉流水线高效自动调度](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3485486)，作者：Luke Anderson 等，OOPSLA 2021\n- [Lorien：高效交付深度学习工作负载](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3472883.3486973)，作者：Cody Hao Yu 等，SoCC 2021\n- [用于深度神经网络吞吐量优化的价值学习](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Fhash\u002F73278a4a86960eeb576a8fd4c9ec6997-Abstract.html)，作者：Benoit Steiner 等，MLSys 2021\n- [多遍机器学习编译器自动调优的灵活方法](https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fxla-autotuning-pact2021.pdf)，作者：Phitchaya Mangpo Phothilimthana 等，PACT 2021\n- [Ansor：为深度学习生成高性能张量程序](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06762)，作者：Lianmin Zheng 等，OSDI 2020\n- [针对 GPU 上 Halide 流水线的调度合成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3406117)，作者：Sioutas Savvas 等，TACO 2020\n- [FlexTensor：面向异构系统张量计算的自动调度探索与优化框架](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3373376.3378508)，作者：Size Zheng 等，ASPLOS 2020\n- [ProTuner：使用蒙特卡洛树搜索调优程序](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.13685)，作者：Ameer Haj-Ali 等，arXiv 2020\n- 
[AdaTune：高效实现自适应张量程序编译](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fuploads\u002Fprod\u002F2020\u002F10\u002Fnips20adatune.pdf)，作者：Menghao Li 等，NeurIPS 2020\n- [通过组合计算与数据的自动变换优化内存层次结构](https:\u002F\u002Fwww.microarch.org\u002Fmicro53\u002Fpapers\u002F738300a427.pdf)，作者：Jie Zhao 等，MICRO 2020\n- [Chameleon：用于加速深度神经网络编译的自适应代码优化](https:\u002F\u002Fopenreview.net\u002Fforum?id=rygG4AVFvH)，作者：Byung Hoon Ahn 等，ICLR 2020\n- [面向稀疏张量代数的稀疏迭代空间变换框架](http:\u002F\u002Ftensor-compiler.org\u002Fsenanayake-oopsla20-taco-scheduling.pdf)，作者：Ryan Senanayake 等，OOPSLA 2020\n- [使用树搜索和随机程序学习优化 Halide](https:\u002F\u002Fhalide-lang.org\u002Fpapers\u002Fautoscheduler2019.html)，作者：Andrew Adams 等，SIGGRAPH 2019\n- [学习优化张量程序](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.08166)，作者：Tianqi Chen 等，NeurIPS 2018\n- [自动调度 Halide 图像处理流水线](http:\u002F\u002Fgraphics.cs.cmu.edu\u002Fprojects\u002Fhalidesched\u002F)，作者：Ravi Teja Mullapudi 等，SIGGRAPH 2016\n\n### 成本模型\n- [TLP：用于张量程序调优的基于深度学习的成本模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.03578)，作者：Yi Zhai 等，ASPLOS 2023\n- [面向稀疏张量程序自动调度的渐近成本模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.14947)，作者：Peter Ahrens 等，PLDI 2022\n- [TenSet：用于学习型张量编译器的大规模程序性能数据集](https:\u002F\u002Fdatasets-benchmarks-proceedings.neurips.cc\u002Fpaper\u002F2021\u002Fhash\u002Fa684eceee76fc522773286a895bc8436-Abstract-round1.html)，作者：Lianmin Zheng 等，NeurIPS 2021\n- [用于自动代码优化的基于深度学习的成本模型](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Fhash\u002F3def184ad8f4755ff269862ea77393dd-Abstract.html)，作者：Riyadh Baghdadi 等，MLSys 2021\n- [面向张量处理单元的学习型性能模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.01040)，作者：Samuel J. Kaufman 等，MLSys 2021\n- [DYNATUNE：深度神经网络编译中的动态张量程序优化](https:\u002F\u002Fopenreview.net\u002Fforum?id=GTGb3M_KcUl)，作者：Minjia Zhang 等，ICLR 2021\n- [MetaTune：基于元学习的成本模型，用于快速高效的自动调优框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.04199)，作者：Jaehun Ryu 等，arXiv 2021\n- [基于 LightGBM 的加速张量程序编译](https:\u002F\u002Fiopscience.iop.org\u002Farticle\u002F10.1088\u002F1742-6596\u002F2078\u002F1\u002F012019)，作者：Gonghan Liu 等，JPCS 2021\n\n### CPU 和 GPU 优化\n- [DeepCuts：适用于多种 GPU 工作负载的深度学习优化框架](https:\u002F\u002Fpldi21.sigplan.org\u002Fdetails\u002Fpldi-2021-papers\u002F13\u002FDeepCuts-A-Deep-Learning-Optimization-Framework-for-Versatile-GPU-Workloads)，作者：Wookeun Jung 等，PLDI 2021\n- [CNN 优化的解析表征与设计空间探索](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3445814.3446759)，作者：Rui Li 等，ASPLOS 2021\n- [UNIT：统一张量化指令编译](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9370330)，作者：Jian Weng 等，CGO 2021\n- [PolyDL：用于创建高性能 DL 原语的多面体优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.02230)，作者：Sanket Tavarageri 等，arXiv 2020\n- [Fireiron：面向 GPU 的数据移动感知调度语言](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3410463.3414632)，作者：Bastian Hagedorn 等，PACT 2020\n- [Volta 张量核心的自动内核生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12645)，作者：Somashekaracharya G. 
Bhaskaracharya 等，arXiv 2020\n- [Swizzle Inventor：GPU 内核的数据移动合成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3297858.3304059)，作者：Phitchaya Mangpo Phothilimthana 等，ASPLOS 2019\n- [在 CPU 上优化 CNN 模型推理](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fatc19\u002Fpresentation\u002Fliu-yizhi)，作者：Yizhi Liu 等，ATC 2019\n- [张量收缩的缓存解析建模与分块大小优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3295500.3356218)，作者：Rui Li 等，SC 2019\n\n### NPU优化\n- [Heron：面向深度学习加速器的自动约束高性能库生成]（Jun Bi 等人，ASPLOS 2023）\n- [AMOS：通过硬件抽象实现空间加速器上张量计算的自动映射]（Size Zheng 等人，ISCA 2022）\n- [迈向神经网络与加速器的协同设计]（Yanqi Zhou 等人，MLSys 2022）\n- [AKG：基于多面体变换的神经处理单元自动内核生成]（Jie Zhao 等人，PLDI 2021）\n\n### 图级优化\n- [POET：在小型设备上训练神经网络，集成重计算与分页技术]（Shishir G. Patil 等人，ICML 2022）\n- [Collage：深度学习后端的无缝集成与自动放置]（Byungsoo Jeon 等人，PACT 2022）\n- [Apollo：通过逐层优化实现基于分区的算子融合]（Jie Zhao 等人，MLSys 2022）\n- [张量图超优化中的等式饱和]（Yichen Yang 等人，MLSys 2021）\n- [IOS：用于 CNN 加速的算子间调度器]（Yaoyao Ding 等人，MLSys 2021）\n- [利用图替换优化 DNN 计算图]（Jingzhi Fang 等人，VLDB 2020）\n- [适用于 ML 编译器的可迁移图优化器]（Yanqi Zhou 等人，NeurIPS 2020）\n- [FusionStitching：提升深度学习工作负载中的内存密集型计算性能]（Zhen Zheng 等人，arXiv 2020）\n- [Nimble：面向深度学习的轻量级并行 GPU 任务调度]（Woosuk Kwon 等人，NeurIPS 2020）\n\n### 动态模型\n- [Axon：一种用于深度学习图中动态形状的语言]（Alexander Collins 等人，arXiv 2022）\n- [DietCode：动态张量程序的自动优化]（Bojian Zheng 等人，MLSys 2022）\n- [The CoRa 张量编译器：以最小填充编译不规则（Ragged）张量]（Pratik Fegade 等人，MLSys 2022）\n- [Nimble：高效编译动态神经网络以支持模型推理]（Haichen Shen 等人，MLSys 2021）\n- [DISC：面向机器学习工作负载的动态形状编译器]（Kai Zhu 等人，EuroMLSys 2021）\n- [Cortex：递归深度学习模型的编译器]（Pratik Fegade 等人，MLSys 2021）\n\n### 图神经网络\n- [Graphiler：利用消息传递数据流图优化图神经网络]（Zhiqiang Xie 等人，MLSys 2022）\n- [Seastar：面向图神经网络的顶点中心编程]（Yidi Wu 等人，Eurosys 2021）\n- [FeatGraph：灵活高效的图神经网络系统后端]（Yuwei Hu 等人，SC 2020）\n\n### 分布式计算\n- [SpDISTAL：编译分布式稀疏张量计算]（Rohan Yadav 等人，SC 2022）\n- [Alpa：自动化分布式深度学习中的算子间及算子内并行化]（Lianmin Zheng、Zhuohan Li、Hao Zhang 等人，OSDI 2022）\n- [Unity：通过代数变换与并行化的联合优化加速 DNN 训练]（Colin Unger、Zhihao Jia 等人，OSDI 2022）\n- [在层次化系统上为深度学习合成最优并行化布局与规约策略]（Ningning Xie、Tamara Norman、Dominik Grewe、Dimitrios Vytiniotis 等人，MLSys 2022）\n- [DISTAL：分布式张量代数编译器]（Rohan Yadav 等人，PLDI 2022）\n- [GSPMD：面向 ML 计算图的一般化可扩展并行化]（Yuanzhong Xu 等人，arXiv 2021）\n- [打破分布式机器学习工作负载中的计算与通信抽象壁垒]（Abhinav Jangda 等人，ASPLOS 2022）\n- [OneFlow：从头重新设计分布式深度学习框架]（Jinhui Yuan 等人，arXiv 2021）\n- [超越深度神经网络的数据与模型并行化]（Zhihao Jia 等人，MLSys 2019）\n- [利用自动数据流图划分支持超大规模模型]（Minjie Wang 等人，EuroSys 2019）\n- [分布式 Halide]（Tyler Denniston 等人，PPoPP 2016）\n\n### 量化\n- [自动化后端感知的训练后量化]（Ziheng Jiang 等人，arXiv 2021）\n- [量化深度学习模型的高效执行：一种编译器方法]（Animesh Jain 等人，arXiv 2020）\n- [高性能量化机器学习内核的自动生成]（Meghan Cowan 等人，CGO 2020）\n\n### 稀疏\n- [稀疏抽象机](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14610) 由 Olivia Hsu 等人撰写，ASPLOS 2023\n- [SparseTIR：用于深度学习中稀疏编译的可组合抽象](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.04606) 由 Zihao Ye 等人撰写，ASPLOS 2023\n- [WACO：学习工作负载感知的稀疏张量程序格式与调度协同优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3575693.3575742) 由 Jaeyeon Won 等人撰写，ASPLOS 2023\n- [Looplets：一种用于结构化协同迭代的语言](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.05250) 由 Willow Ahrens 等人撰写，CGO 2023\n- [稀疏张量格式转换与优化的代码合成](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F367180198_Code_Synthesis_for_Sparse_Tensor_Format_Conversion_and_Optimization) 由 Tobi Popoola 等人撰写，CGO 2023\n- [Stardust：将稀疏张量代数编译为可重构数据流架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.03251) 由 Olivia Hsu 等人撰写，arXiv 2022\n- 
[无损压缩与稀疏计算的统一编译](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1109\u002FCGO53902.2022.9741282) 由 Daniel Donenfeld 等人撰写，CGO 2022\n- [SparseLNR：利用循环嵌套重排加速稀疏张量计算](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3524059.3532386) 由 Adhitha Dias 等人撰写，ICS 2022\n- [SparTA：通过带有稀疏属性的张量实现深度学习模型稀疏化](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi22\u002Fpresentation\u002Fzheng-ningxin) 由 Ningxin Zheng 等人撰写，OSDI 2022\n- [MLIR 中对稀疏张量计算的编译器支持](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.04305) 由 Aart J.C. Bik 等人撰写，TACO 2022\n- [稀疏数组编程模型的编译](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3485505) 由 Rawn Henry 和 Olivia Hsu 等人撰写，OOPSLA 2021\n- [MLIR 中的高性能稀疏张量代数编译器](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9651314) 由 Ruiqin Tian 等人撰写，LLVM-HPC 2021\n- [动态稀疏张量代数编译](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.01394) 由 Stephen Chou 等人撰写，arXiv 2021\n- [高效稀疏张量格式转换例程的自动生成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3385412.3385963) 由 Stephen Chou 等人撰写，PLDI 2020\n- [TIRAMISU：面向稠密与稀疏深度学习的多面体编译器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04091) 由 Riyadh Baghdadi 等人撰写，arXiv 2020\n- [带工作区的张量代数编译](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.5555\u002F3314872.3314894) 由 Fredrik Kjolstad 等人撰写，CGO 2019\n- [稀疏计算数据依赖关系简化以生成高效的编译器检查器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3314221.3314646) 由 Mahdi Soltan Mohammadi 等人撰写，PLDI 2019\n- [Taichi：一种用于空间稀疏数据结构上高性能计算的语言](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3355089.3356506) 由 Yuanming Hu 等人撰写，ACM ToG 2019\n- [稀疏多面体框架：编译器生成的检查器-执行器代码组合](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8436444) 由 Michelle Mills Strout 等人撰写，IEEE 2018 年会议论文集\n- [稀疏张量代数编译器的格式抽象](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3276493) 由 Stephen Chou 等人撰写，OOPSLA 2018\n- [ParSy：针对并行化的稀疏矩阵计算检查与变换](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8665791) 由 Kazem Cheshmi 等人撰写，SC 2018\n- [Sympiler：通过解耦符号分析来转换稀疏矩阵代码](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3126908.3126936) 由 Kazem Cheshmi 等人撰写，SC 2017\n- [张量代数编译器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3133901) 由 Fredrik Kjolstad 等人撰写，OOPSLA 2017\n- [下一代泛型编程及其在稀疏矩阵计算中的应用](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F335231.335240) 由 Nikolay Mateev 等人撰写，ICS 2000\n- [基于高级规格的稀疏矩阵代码合成框架](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F1592771) 由 Nawaaz Ahmed 等人撰写，SC 2000\n- [自动非零元素结构分析](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002F10.1137\u002FS009753979529595X) 由 Aart Bik 等人撰写，SIAM 计算期刊 1999\n- [SIPR：生成稀疏矩阵计算高效代码的新框架](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F3-540-48319-5_14) 由 William Pugh 等人撰写，LCPC 1998\n- [稀疏矩阵计算的自动数据结构选择与转换](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F485501) 由 Aart Bik 等人撰写，TPDS 1996\n- [稀疏矩阵计算的编译技术](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F165939.166023) 由 Aart Bik 等人撰写，ICS 1993\n\n\n### 程序重写\n- [通过高级调度重写进行验证的张量程序优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3498717) 由 Amanda Liu 等人撰写，POPL 2022\n- [基于访问模式的纯张量程序重写（表示珍珠）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.09377) 由 Gus Smith 等人撰写，MAPL 2021\n- [等式饱和用于张量图超优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.01332) 由 Yichen Yang 等人撰写，MLSys 2021\n\n### 验证与测试\n- [NNSmith：为深度学习编译器生成多样且有效的测试用例](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3575693.3575707) 由 Jiawei Liu 等人撰写，ASPLOS 2023\n- [基于覆盖率的张量编译器模糊测试，结合 IR 
与 Pass 联合变异](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3527317) 由 Jiawei Liu 等人撰写，OOPSLA 2022\n- [Halide 语言的端到端翻译验证](https:\u002F\u002Fstorage.googleapis.com\u002Fpub-tools-public-publication-data\u002Fpdf\u002F2d03e3ae1106d3a2c950fcdc5eeb2c383eb24372.pdf) 由 Basile Clément 等人撰写，OOPSLA 2022\n- [深度学习编译器缺陷的全面研究](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3468264.3468591) 由 Qingchao Shen 等人撰写，ESEC\u002FFSE 2021\n- [利用程序合成验证并改进 Halide 的项重写系统](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3428234) 由 Julie L. Newcomb 等人撰写，OOPSLA 2020\n\n## 教程\n- [机器学习编译](https:\u002F\u002Fmlc.ai\u002Fsummer22\u002F)\n- [深入深度学习编译器](https:\u002F\u002Ftvm.d2l.ai\u002F)\n\n## 贡献\n我们鼓励大家为本仓库做出贡献。请打开一个[问题](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fissues)或发送一个[拉取请求](https:\u002F\u002Fgithub.com\u002Fmerrymercy\u002Fawesome-tensor-compilers\u002Fpulls)。\n\n### 关于链接格式的说明\n我们更倾向于使用指向信息更丰富页面的链接，而不是单独的 PDF 文件。例如，对于 arXiv 论文，我们更喜欢 https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04799，而不是 https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.04799.pdf。对于 USENIX 论文（OSDI\u002FATC），我们更倾向于 https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fosdi18\u002Fpresentation\u002Fchen，而非 https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi18-chen.pdf。对于 ACM 论文（ASPLOS\u002FPLDI\u002FEurosys），我们更偏好 https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3519939.3523446，而不是 https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3519939.3523446。","# Awesome Tensor Compilers 快速上手指南\n\n`awesome-tensor-compilers` 并非单一的可安装软件，而是一个精选的**张量编译器与深度学习编译技术资源列表**。它汇集了开源项目、学术论文和教程。\n\n要开始使用其中的技术，您需要根据具体需求选择列表中的某个编译器项目（如 TVM, MLIR, Triton 等）进行安装和使用。本指南将以生态最成熟、应用最广泛的 **Apache TVM** 为例，演示如何从零开始体验张量编译技术。其他项目（如 MLIR, XLA）的安装逻辑类似，请参考其各自官方文档。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下基本要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04\u002F22.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python**: 版本 3.8 或更高。\n*   **构建工具**: `gcc`, `g++`, `cmake`, `git`, `llvm-dev` (LLVM 是多数编译器的后端依赖)。\n*   **深度学习框架**: 已安装 PyTorch 或 TensorFlow (用于模型导入)。\n\n**前置依赖安装命令 (Ubuntu 示例):**\n\n```bash\nsudo apt-get update\nsudo apt-get install -y python3 python3-pip python3-setuptools gcc g++ cmake git llvm-dev libedit-dev libssl-dev zlib1g-dev libboost-all-dev libncurses5-dev libsqlite3-dev\n```\n\n## 安装步骤\n\n推荐使用 Python pip 直接安装预编译包，这是最快的方式。如果需要针对特定硬件（如特定的 GPU 或 NPU）进行深度优化，则需从源码编译（此处仅展示快速安装）。\n\n**1. 安装 Apache TVM:**\n\n```bash\npip install apache-tvm\n```\n*(注：可按需固定版本号，建议查看 PyPI 获取可用版本)*\n\n**2. 验证安装:**\n\n```bash\npython3 -c \"import tvm; print(tvm.__version__)\"\n```\n若输出版本号且无报错，则安装成功。\n\n## 基本使用\n\n以下示例展示如何使用 TVM 将一个简单的 PyTorch 模型编译并优化为可在 CPU 上高效运行的张量程序。\n\n**1. 创建测试脚本 (`quick_start.py`):**\n\n```python\nimport torch\nimport tvm\nfrom tvm import relay\nfrom tvm.contrib import graph_executor\n\n# 1. 定义一个简单的 PyTorch 模型\nclass SimpleNet(torch.nn.Module):\n    def __init__(self):\n        super(SimpleNet, self).__init__()\n        self.linear = torch.nn.Linear(10, 5)\n\n    def forward(self, x):\n        return self.linear(x)\n\nmodel = SimpleNet()\nmodel.eval()\n\n# 2. 创建虚拟输入\ninput_shape = (1, 10)\ninput_data = torch.randn(input_shape)\n\n# 3. 将 PyTorch 模型转换为 Relay IR (TVM 的中间表示)\nscripted_model = torch.jit.trace(model, input_data).eval()\nmod, params = relay.frontend.from_pytorch(scripted_model, [(\"input0\", input_shape)])\n\n# 4. 
配置编译目标 (以 LLVM\u002FCPU 为例)\ntarget = tvm.target.Target(\"llvm\")\ndev = tvm.cpu(0)\n\n# 5. 编译模型\nwith tvm.transform.PassContext(opt_level=3):\n    lib = relay.build(mod, target=target, params=params)\n\n# 6. 创建运行时并执行\ntvm_model = graph_executor.GraphModule(lib[\"default\"](dev))\ntvm_model.set_input(\"input0\", tvm.nd.array(input_data.numpy()))\ntvm_model.run()\noutput_tvm = tvm_model.get_output(0).numpy()\n\n# 7. 验证结果 (对比 PyTorch 原生输出)\noutput_pytorch = model(input_data).detach().numpy()\nprint(\"TVM Output shape:\", output_tvm.shape)\nprint(\"Results match:\", torch.allclose(torch.tensor(output_tvm), torch.tensor(output_pytorch), atol=1e-5))\n```\n\n**2. 运行脚本:**\n\n```bash\npython3 quick_start.py\n```\n\n**预期输出:**\n如果配置正确，脚本将输出模型的形状以及 `Results match: True`，表明张量编译器已成功接管并执行了计算任务。\n\n---\n*提示：若要探索列表中其他项目（如 `mlir`, `triton`, `hidet`），请访问 `awesome-tensor-compilers` 仓库中对应的 \"Open Source Projects\" 链接获取其专属安装指南。*","某自动驾驶初创公司的算法团队正致力于将新研发的动态形状感知模型部署到边缘计算设备上，以满足实时路况分析的低延迟需求。\n\n### 没有 awesome-tensor-compilers 时\n- **选型迷茫**：面对 TVM、MLIR、XLA 等数十个编译器项目，团队花费数周调研却难以确定哪个最支持动态输入尺寸，严重拖慢项目进度。\n- **性能瓶颈**：盲目使用默认后端导致推理延迟高达 200ms，无法满足车载系统 50ms 的硬性指标，且缺乏针对特定 NPU 的优化方案指引。\n- **重复造轮子**：由于不了解业界已有的自动调优（Auto-tuning）和量化论文，工程师手动编写底层 CUDA 内核，不仅效率低下且极易出错。\n- **生态割裂**：在尝试混合使用不同框架时，因缺乏对中间表示（IR）设计的统一认知，导致模型转换频繁报错，调试成本极高。\n\n### 使用 awesome-tensor-compilers 后\n- **精准决策**：通过查阅分类清晰的开源项目列表，团队迅速锁定支持动态形状的 BladeDISC 和 Hidet，将技术选型时间从数周缩短至 2 天。\n- **极致性能**：参考列表中关于 NPU 优化和自动调度的前沿论文，团队应用了 SOTA 优化策略，成功将推理延迟降低至 35ms，超越预期目标。\n- **站在巨人肩上**：直接复用列表中推荐的成熟工具链（如 Triton 或 Speedster），避免了底层算子的重复开发，让团队专注于核心算法迭代。\n- **路径清晰**：借助详细的教程和综述文章，团队快速构建了统一的编译优化流程，解决了多框架协作难题，显著提升了工程稳定性。\n\nawesome-tensor-compilers 如同深度学习编译器领域的“导航图”，帮助开发者在复杂的技术迷宫中快速找到最优路径，将原本漫长的探索期转化为立竿见影的性能提升。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmerrymercy_awesome-tensor-compilers_ad94d844.png","merrymercy","Lianmin Zheng","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmerrymercy_2a7f5d33.jpg","Engineer","xAI","Bay Area, CA","lianminzheng@gmail.com","lm_zheng","http:\u002F\u002Flmzheng.net","https:\u002F\u002Fgithub.com\u002Fmerrymercy",null,2735,324,"2026-04-10T18:08:30",5,"","未说明",{"notes":90,"python":88,"dependencies":91},"该仓库（awesome-tensor-compilers）是一个 curated list（精选列表），主要收集了张量编译器相关的开源项目、学术论文和教程链接，其本身不是一个可执行的软件工具，因此 README 中未包含具体的运行环境需求（如操作系统、GPU、内存、Python 版本或依赖库）。用户若需使用列表中提到的具体工具（如 TVM, MLIR, Triton 等），需分别查阅各项目的独立文档以获取相应的安装和运行要求。",[],[35,14],[94,95,96,97,98,99,100],"compiler","tensor","deep-learning","machine-learning","high-performance-computing","code-generation","programming-language","2026-03-27T02:49:30.150509","2026-04-14T00:02:54.136815",[],[]]