[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zwang4--awesome-machine-learning-in-compilers":3,"tool-zwang4--awesome-machine-learning-in-compilers":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":80,"stars":83,"forks":84,"last_commit_at":85,"license":86,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":94,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":109},3296,"zwang4\u002Fawesome-machine-learning-in-compilers","awesome-machine-learning-in-compilers","Must read research papers and links to tools and datasets that are related to using machine learning for compilers and systems optimisation","awesome-machine-learning-in-compilers 
是一个精心整理的开源资源库，专注于汇聚机器学习在编译器与系统优化领域的前沿成果。它解决了该交叉学科研究资料分散、入门门槛高的问题，将海量的学术论文、数据集、软件工具及行业会议信息进行了系统化分类与整合。\n\n这份清单涵盖了从基础的综述文章到具体的迭代编译、指令级优化、并行任务调度，再到如今热门的大语言模型（LLM）与编译器协同等细分方向。无论是希望快速了解领域全貌的初学者，还是正在寻找特定算法实现或基准测试数据的研究人员与编译器开发者，都能在这里高效定位所需资源。\n\n其独特亮点在于不仅收录了经典的传统机器学习应用案例，还持续更新包括“深度学习编译器”及“LLM 与新编译器栈”在内的最新技术趋势，确保了内容的前瞻性与实用性。对于致力于提升代码执行效率、探索自动调优技术或从事相关学术研究的专业人士而言，awesome-machine-learning-in-compilers 是一份不可或缺的权威导航指南，帮助大家站在巨人的肩膀上推动系统性能优化的边界。","# Awesome machine learning for compilers and program optimisation \n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![Maintenance](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002Fzwang4\u002Fawesome-machine-learning-in-compilers\u002Fgraphs\u002Fcommit-activity)\n\nA curated list of awesome research papers, datasets, and tools for applying machine learning techniques to compilers and program optimisation.\n\n\n## Contents\n- [Papers](#papers)\n   - [Survey](#survey)\n   - [Iterative Compilation and Compiler Option Tuning](#iterative-compilation-and-compiler-option-tuning)\n   - [Instruction-level Optimisation](#instruction-level-optimisation)\n   - [Parallelism Mapping and Task Scheduling](#parallelism-mapping-and-task-scheduling)\n   - [Languages and Compilation](#languages-and-compilation)\n   - [Auto-tuning and Design Space Exploration](#auto-tuning-and-design-space-exploration)\n   - [Code Size Reduction](#code-size-reduction)\n   - [Cost and Performance Models](#cost-and-performance-models)\n   - [Domain-specific Optimisation](#domain-specific-optimisation)\n   - [Learning Program Representation](#learning-program-representation)\n   - [ML for Compilers and Systems Optimisation](#ml-for-compilers-and-systems-optimisation)\n   - [Memory\u002FCache Modelling\u002FAnalysis](#memorycache-modelinganalysis)\n- 
[Books](#books)\n- [Talks and Tutorials](#talks-and-tutorials)\n- [Software](#software)\n- [Benchmarks and Datasets](#benchmarks-and-datasets)\n- [Conferences](#conferences)\n- [Journals](#journals)\n- [How to Contribute](#how-to-contribute)\n\n## Papers\n#### Survey\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [Machine Learning in Compiler Optimisation](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpieee18.pdf) - Zheng Wang and Michael O'Boyle, Proceedings of the IEEE, 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [A survey on compiler autotuning using machine learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3197978) - Amir H. Ashouri, William Killian, John Cavazos, Gianluca Palermo, and Cristina Silvano, ACM Computing Surveys (CSUR), 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [A survey of machine learning for big code and naturalness](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.06182) - Miltiadis Allamanis, Earl T. 
Barr, Premkumar Devanbu, and Charles Sutton, ACM Computing Surveys (CSUR), 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [A Taxonomy of ML for Systems Problems](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9153088) - Martin Maas, IEEE Micro, 2020\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F34-pages-green.svg\" alt=\"34-pages\" align=\"top\"> [The Deep Learning Compiler: A Comprehensive Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03794) - Mingzhen Li, Yi Liu, Xiaoyan Liu, Qingxiao Sun, Xin You, Hailong Yang, Zhongzhi Luan, Lin Gan, Guangwen Yang, Depei Qian, IEEE Transactions on Parallel and Distributed Systems, 2021\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F33-pages-green.svg\" alt=\"33-pages\" align=\"top\"> [The New Compiler Stack: A Survey on the Synergy of LLMs and Compilers](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs42514-025-00270-x) - Shuoming Zhang, Jiacheng Zhao, Qiuchu Yu, Chunwei Xia, Zheng Wang, Xiaobing Feng, Huimin Cui, CCF Transactions on High Performance Computing, 2026\n\n#### Iterative Compilation and Compiler Option Tuning\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Towards Efficient Compiler Auto-tuning: Leveraging Synergistic Search Spaces](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696443.3708961) - Haolin Pan, Yuanyu Wei, Mingjie Xing, Yanjun Wu, Chen Zhao. CGO 2025. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [SRTuner: Effective Compiler Optimization Customization by Exposing Synergistic Relations](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9741263) - Sunghyun Park, Salar Latifi, Yongjun Park, Armand Behroozi, Byungsoo Jeon, Scott Mahlke. CGO 2022. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Iterative Compilation Optimization Based on Metric Learning and Collaborative Filtering](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3480250) - Hongzhi Liu, Jie Luo, Ying Li, Zhonghai Wu. ACM TACO 2022. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Bayesian Optimization is Superior to Random Search for\nMachine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.10201.pdf) - Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, Isabelle Guyon. arXiv 2021. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Bliss: auto-tuning complex applications using a pool of diverse lightweight learning models](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3453483.3454109) - RB Roy, T Patel, V Gadepally, D Tiwari. PLDI 2021. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Efficient Compiler Autotuning via Bayesian Optimization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1uc5d6xn3EUYXWVV8VFSdtfZ9eqvTL3k1\u002Fview) - Junjie Chen, Ningxin Xu, Peiqi Chen, Hongyu Zhang. ICSE 2021. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Customized Monte Carlo Tree Search for LLVM\u002FPolly's Composable Loop Optimization Transformations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04555) - Jaehoon Koo, Prasanna Balaprakash, Michael Kruse, Xingfu Wu, Paul Hovland, Mary Hall. Arxiv.org, 2021. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Improved basic block reordering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.04676.pdf) - Andy Newell and Sergey Pupyrev. IEEE Transactions on Computers, 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Static Neural Compiler Optimization via Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.08951) - Rahim Mammadli, Ali Jannesari, Felix Wolf. LLVM HPC Workshop, 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Autotuning Search Space for Loop Transformations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.04676.pdf) - Michael Kruse, Hal Finkel, Xingfu Wu. LLVM HPC Workshop, 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [A Collaborative Filtering Approach for the Automatic Tuning of Compiler Optimisations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3372799.3394361) - Stefano Cereda, Gianluca Palermo, Paolo Cremonesi, and Stefano Doni, LCTES 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Autophase: Compiler phase-ordering for hls with deep reinforcement learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2020\u002Ffile\u002F4e732ced3463d06de0ca9a15b6153677-Paper.pdf). Ameer Haj-Ali, Qijing Huang,  William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek. 
MLSys 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [FuncyTuner: Auto-tuning Scientific Applications With Per-loop\nCompilation](https:\u002F\u002Farcb.csc.ncsu.edu\u002F~mueller\u002Fftp\u002Fpub\u002Fmueller\u002Fpapers\u002Ficpp19.pdf) - \tTao Wang, Nikhil Jain, David Beckingsale, David Böhme, Frank Mueller, Todd Gamblin. ICPP 2019. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F21-pages-green.svg\" alt=\"21-pages\" align=\"top\"> [Micomp: Mitigating the compiler phase-ordering problem using optimization sub-sequences and machine learning](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F93751619.pdf) - Amir H. Ashouri, Andrea Bignoli, Gianluca Palermo, Cristina Silvano, Sameer Kulkarni, and John Cavazos. ACM Transactions on Architecture and Code Optimization (TACO) 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Iterative Schedule Optimization for Parallelization in the\nPolyhedron Model](https:\u002F\u002Fwww.infosun.fim.uni-passau.de\u002Fpublications\u002Fdocs\u002FGGS+17.pdf) - Stefan Ganser, Armin Grösslinger, Norbert Siegmund, Sven Apel, and Christian Lengauer. ACM Transactions on Architecture and Code Optimization (TACO), 2017. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Learning to superoptimize programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01787v3) - Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H.S. Torr, Pushmeet Kohlim. ICLR 2017\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Continuous learning of compiler heuristics](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F2400682.2400705) - Michele Tartara and Stefano Crespi Reghizzi. 
ACM Transactions on Architecture and Code Optimization (TACO), 2013. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Mitigating the compiler optimization phase-ordering problem using machine learning](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Foopsla-2012.pdf) - Sameer Kulkarni and John Cavazos. OOPSLA 2012.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [An evaluation of different modeling techniques for iterative compilation](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fcases-2011.pdf) - Eunjung Park, Sameer Kulkarni, and John Cavazos. CASES 2011.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Evaluating iterative optimization across 1000 datasets](https:\u002F\u002Fusers.elis.ugent.be\u002F~leeckhou\u002Fpapers\u002Fpldi10.pdf) - Yang Chen, Yuanjie Huang, Lieven Eeckhout, Grigori Fursin, Liang Peng, Olivier Temam, and Chengyong Wu. PLDI 2010\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Iterative optimization in the polyhedral model: Part II, multidimensional time](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fpldi-2008.pdf) - Louis-Noël Pouchet, Cédric Bastoul, Albert Cohen, and John Cavazos. PLDI 2008.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Cole: compiler optimization level exploration](https:\u002F\u002Fusers.elis.ugent.be\u002F~leeckhou\u002Fpapers\u002Fcgo08.pdf) - Kenneth Hoste and Lieven Eeckhout. 
CGO 2008.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [MILEPOST GCC: machine learning based research compiler](http:\u002F\u002Fwww.fursin.net\u002Fpapers\u002Ffmtp2008.pdf) - Grigori Fursin, Cupertino Miranda, Olivier Temam, Mircea Namolaru, Elad Yom-Tov, Ayal Zaks, Bilha Mendelson et al., 2008\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Evaluating heuristic optimization phase order search algorithms](http:\u002F\u002Fwww.cs.fsu.edu\u002F~whalley\u002Fpapers\u002Fcgo07.pdf) - J. W. Davidson, Gary S. Tyson, D. B. Whalley, and P. A. Kulkarni. CGO 2007. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Rapidly selecting good compiler optimizations using performance counters](http:\u002F\u002Febonilla.github.io\u002Fpapers\u002Fcavazos-et-al-cgo-2007.pdf) - John Cavazos, Grigori Fursin, Felix Agakov, Edwin Bonilla, Michael FP O'Boyle, and Olivier Temam. CGO 2007. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Using machine learning to focus iterative optimization](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fbfranke\u002FPublications\u002Fcgo-2006.pdf) - Felix Agakov, Edwin Bonilla, John Cavazos, Björn Franke, Grigori Fursin, Michael FP O'Boyle, John Thomson, Marc Toussaint, and Christopher KI Williams. CGO 2006. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Method-specific dynamic compilation using logistic regression](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.132.4370&rep=rep1&type=pdf) - John Cavazos and Michael FP O'boyle. OOPSLA 2005. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Predicting unroll factors using supervised classification](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.93.2788&rep=rep1&type=pdf) - Mark Stephenson and Saman Amarasinghe. CGO 2005. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Fast searches for effective optimization phase sequences](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.93.2788&rep=rep1&type=pdf) - Prasad Kulkarni, Stephen Hines, Jason Hiser, David Whalley, Jack Davidson, and Douglas Jones. PLDI 2004. \n\n\n#### Instruction-level Optimisation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [VEGA: Automatically Generating Compiler Backends using a Pre-trained Transformer Model](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696443.3708931) - Ming Zhong, Fang Lv, Lulin Wang, Lei Qiu, Yingying Wang, Ying Liu, Huimin Cui, Xiaobing Feng, Jingling Xue. CGO 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [RL4ReAl: Reinforcement Learning for Register Allocation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.02013.pdf) - S. VenkataKeerthy, Siddharth Jain, Anilava Kundu, Rohit Aggarwal, Albert Cohen, Ramakrishna Upadrasta. CC 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Reinforcement Learning assisted Loop Distribution for Locality and Vectorization](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F365475992_Reinforcement_Learning_assisted_Loop_Distribution_for_Locality_and_Vectorization) - Shalini Jain, S. 
VenkataKeerthy, Rohit Aggarwal, Tharun Kumar Dangeti, Dibyendu Das, Ramakrishna Upadrasta. LLVM HPC Workshop 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Discovering faster matrix multiplication algorithms with reinforcement learning](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-022-05172-4.pdf) -  Fawzi, Alhussein, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov et al. Nature 2022\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [A Reinforcement Learning Environment for Polyhedral Optimizations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13732.pdf) - Alexander Brauckmann, Andrés Goens, Jeronimo Castrillon. PACT, 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [AI Powered Compiler Techniques for DL Code Optimization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.05573.pdf) - Sanket Tavarageri, Gagandeep Goyal, Sasikanth Avancha, Bharat Kaul, Ramakrishna Upadrasta. Arxiv.org, 2021. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [VeGen: A Vectorizer Generator for SIMD and Beyond](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2021\u002Fvegen.pdf) - Yishen Chen, Charith Mendis, Michael Carbin, Saman Amarasinghe. ASPLOS 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Deep Learning-based Hybrid Graph-Coloring Algorithm for Register Allocation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03700) - Dibyendu Das, Shahid Asghar Ahmad, Kumar Venkataramanan. LLVM HPC Workshop, 2020. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [NeuroVectorizer: end-to-end vectorization with deep reinforcement learning](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~krste\u002Fpapers\u002Fneurovectorizer-cgo2020.pdf) - Ameer Haj-Ali, Nesreen K. Ahmed, Ted Willke, Yakun Sophia Shao, Krste Asanovic, and Ion Stoica. CGO 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Unleashing the Power of Learning: An Enhanced Learning-Based Approach for Dynamic Binary Translation](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fatc19-song_0.pdf) - Changheng Song, Wenwen Wang, Pen-Chung Yew, Antonia Zhai, Weihua Zhang. USENIX ATC 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Compiler Auto-Vectorization with Imitation Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9604-compiler-auto-vectorization-with-imitation-learning.pdf) - Charith Mendis, Cambridge Yang, Yewen Pu, Saman P. Amarasinghe, Michael Carbin. NeurIPS 2019. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F19-pages-green.svg\" alt=\"19-pages\" align=\"top\"> [Multi-objective Exploration for Practical Optimization Decisions in Binary Translation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3358185) - Sunghyun Park, Youfeng Wu, Janghaeng Lee, Amir Aupov, and Scott Mahlke. ACM Transactions on Embedded Computing Systems (TECS), 2019. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Automatic construction of inlining heuristics using machine learning.](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FCGO.2013.6495004) - Sameer Kulkarni, John Cavazos, Christian Wimmer, and Douglas Simon. CGO 2013. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Automatic tuning of inlining heuristics](http:\u002F\u002Fsc05.supercomputing.org\u002Fschedule\u002Fpdf\u002Fpap274.pdf) - John Cavazos and Michael O'Boyle. SC 2005. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Inducing heuristics to decide whether to schedule](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fpldi-2004.pdf) - John Cavazos and J. Eliot B. Moss. PLDI 2003. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Meta optimization: Improving compiler heuristics with machine learning](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcag\u002Fmetaopt\u002Fpapers\u002Fmetaopt-pldi03.pdf) - Mark Stephenson, Saman Amarasinghe, Martin Martin, and Una-May O'Reilly. PLDI 2003. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [Learning to schedule straight-line code](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1349-learning-to-schedule-straight-line-code.pdf) - J. Eliot B. Moss, Paul E. Utgoff, John Cavazos, Doina Precup, Darko Stefanovic, Carla E. Brodley, and David Scheeff. NeurIPS 1998. \n\n#### Auto-tuning and Design Space Exploration\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [REASONING COMPILER: LLM-Guided Optimizations for Efficient Model Serving](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Floc\u002Fsan-diego\u002Fposter\u002F120162) - Annabelle Sujun Tang, Christopher Priebe, Rohan Mahapatra, Lianhui Qin, Hadi Esmaeilzadeh. 
NeurIPS 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [Compiler-R1: Towards Agentic Compiler Auto-tuning with Reinforcement Learning](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2025\u002Floc\u002Fsan-diego\u002Fposter\u002F115582) - Haolin Pan, Hongyu Lin, Haoran Luo, Yang Liu, Kaichun Yao, Libo Zhang, Mingjie Xing, Yanjun Wu. NeurIPS 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [pyATF: Constraint-Based Auto-Tuning in Python](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3708493.3712682) - Richard Schulze , Sergei Gorlatch , Ari Rasch. CC 2025. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [IntelliGen: Instruction-Level Auto-tuning for Tensor Program with Monotonic Memory Optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696443.3708967) -  Zixuan Ma, Haojie Wang, Jingze Xing, Shuhong Huang, Liyan Zheng, Chen Zhang, Huanqi Cao, Kezhao Huang, Mingshu Zhai, Shizhi Tang, Penghan Wang, Jidong Zhai. CGO 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Accelerated Auto-Tuning of GPU Kernels for Tensor Computations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3650200.3656626) -  Chendi Li and Yufan Xu and Sina Mahdipour Saravani and P. Sadayappan. ICS 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Revealing Compiler Heuristics through Automated Discovery and Optimization](https:\u002F\u002Fwww.research.ed.ac.uk\u002Ffiles\u002F389049758\u002FRevealing_computer_heuristics_SEEKER_DOA31072023_AFV_CC_BY.pdf) -  Volker Seeker, Chris Cummins, Murray Cole, Björn Franke, Kim Hazelwood, Hugh Leather. 
CGO 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [The Droplet Search Algorithm for Kernel Scheduling](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3650109) -  Michael Canesche, Vanderson M. Rosario, Edson Borin, Fernando Magno Quintão Pereira. ACM TACO 2024\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F24-pages-green.svg\" alt=\"24-pages\" align=\"top\"> [BaCO: A Fast and Portable Bayesian Compiler Optimization Framework](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.11142.pdf) - Erik Hellsten, Artur Souza, Johannes Lenfers, Rubens Lacouture, Olivia Hsu, Adel Ejjeh, Fredrik Kjolstad, Michel Steuwer, Kunle Olukotun, Luigi Nardi. ASPLOS 2024. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [(De\u002FRe)-Compositions Expressed Systematically via MDH-Based Schedules](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3578360.3580269) - Ari Rasch , Richard Schulze , Denys Shabalin , Anne Elster , Sergei Gorlatch , Mary Hall. CC 2023. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [Autotuning Convolutions is Easier Than You Think](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3570641) - Nicolas Tollenaere , Guillaume Iooss , Stéphane Pouget , Hugo Brunie , Christophe Guillon , Albert Cohen , P. Sadayappan , Fabrice Rastello. ACM TACO 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3559009.3569682) - Perry Gibson, Jose Cano. 
PACT 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Glimpse: Mathematical Embedding of Hardware Specification for Neural Compilation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3489517.3530590) - Byung Hoon Ahn, Sean Kinzer, Hadi Esmaeilzadeh. DAC 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [One-shot tuner for deep learning compilers](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3497776.3517774) - Jaehun Ryu, Eunhyeok Park,  Hyojin Sung. CC 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [A Flexible Approach to Autotuning Multi-Pass Machine Learning Compilers](https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fxla-autotuning-pact2021.pdf) - Phitchaya Mangpo Phothilimthana, Amit Sabne, Nikhil Sarda, Karthik Srinivasa Murthy, Yanqi Zhou, Christof Angermueller, Mike Burrows, Sudip Roy, Ketan Mandke, Rezsa Farahani, Yu Emma Wang, Berkin Ilbeyi, Blake Hechtman, Bjarke Roune, Shen Wang, Yuanzhong Xu, and Samuel J. Kaufman. PACT 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [TASO: Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions]([https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fxla-autotuning-pact2021.pdf](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3341301.3359630)) - Zhihao Jia, Oded Padon, James Thomas, Todd Warszawski, Matei Zaharia, and Alex Aiken. 
ACM SOSP 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Value Learning for Throughput Optimization of Deep Neural Workloads](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Ffile\u002F73278a4a86960eeb576a8fd4c9ec6997-Paper.pdf) - Benoit Steiner, Chris Cummins, Horace He, Hugh Leather. MLSys 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=GTGb3M_KcUl) - Minjia Zhang, Menghao Li, Chi Wang, Mingqin Li. ICLR 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning](https:\u002F\u002Fopenreview.net\u002Fpdf?id=-6vS_4Kfz0) - Shauharda Khadka, Estelle Aflalo, Mattias Mardar, Avrech Ben-David, Santiago Miret, Shie Mannor, Tamir Hazan, Hanlin Tang, Somdeb Majumdar. ICLR 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [GPTune: Multitask Learning for Autotuning Exascale Applications](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3437801.3441621) - Yang Liu, Wissam M. Sid-Lakhdar, Osni Marques, Xinran Zhu, Chang Meng, James W. Demmel, Xiaoye S. Li. PPoPP 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [ApproxTuner: A Compiler and Runtime System for Adaptive Approximations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3437801.3446108) - Hashim Sharif, Yifan Zhao, Maria Kotsifakou, Akash Kothari, Ben Schreiber, Elizabeth Wang, Yasmin Sarita, Nathan Zhao, Keyur Joshi, Vikram S. Adve, Sasa Misailovic, Sarita Adve. 
PPoPP 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Efficient Auto-Tuning of Parallel Programs with Interdependent Tuning Parameters via Auto-Tuning Framework (ATF)](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3427093) - Ari Rasch, Richard Schulze, Michel Steuwer, Sergei Gorlatch. ACM TACO 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation](https:\u002F\u002Fopenreview.net\u002Fpdf?id=rygG4AVFvH) - Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh. ICLR 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [Ansor: Generating High-Performance Tensor Programs for Deep Learning](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi20-zheng.pdf) - Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica. OSDI 2020. ([slides](https:\u002F\u002Fwww.usenix.org\u002Fsites\u002Fdefault\u002Ffiles\u002Fconference\u002Fprotected-files\u002Fosdi20_slides_zheng.pdf), [presentation](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=A2hJ_Mj02zk))\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs](https:\u002F\u002Freadingxtra.github.io\u002Fdocs\u002Fgpu-graph\u002FMengPPoPP2019.pdf) - Ke Meng, Jiajia Li, Guangming Tan, Ninghui Sun. 
PPoPP 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.03443.pdf) - Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer. CVPR 2019. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [TVM: An automated end-to-end optimizing compiler for deep learning](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi18-chen.pdf) - Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan et al. OSDI 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [BOAT: Building auto-tuners with structured Bayesian optimization](https:\u002F\u002Fwww.cl.cam.ac.uk\u002F~ey204\u002Fteaching\u002FACS\u002FR244_2018_2019\u002Fpapers\u002Fdalibard_WWW_2017.pdf) - Valentin Dalibard, Michael Schaarschmidt, and Eiko Yoneki. WWW 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Cobayn: Compiler autotuning framework using Bayesian networks](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F2928270) - Amir H. Ashouri, Giovanni Mariani, Gianluca Palermo, Eunjung Park, John Cavazos, and Cristina Silvano. ACM Transactions on Architecture and Code Optimization (TACO), 2016. 
\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Autotuning algorithmic choice for input sensitivity](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2015\u002Fyding-pldi15-pbinput.pdf) - Yufei Ding, Jason Ansel, Kalyan Veeramachaneni, Xipeng Shen, Una-May O'Reilly, and Saman Amarasinghe. PLDI 2015.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Fast: A fast stencil autotuning framework based on an optimal-solution space model](http:\u002F\u002Fwww.elbagarza.com\u002Fpdfs\u002Fjia_2015_gpu.pdf) - Yulong Luo, Guangming Tan, Zeyao Mo, and Ninghui Sun. ACM Transactions on Architecture and Code Optimization (TACO), 2015. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [GPU performance and power tuning using regression trees](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F2751205.2751214) - Wenhao Jia, Elba Garza, Kelly A. Shaw, and Margaret Martonosi. ICS 2015.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Reinforcement learning-based inter-and intra-application thermal optimization for lifetime improvement of multicore systems](https:\u002F\u002Fcfaed.tu-dresden.de\u002Ffiles\u002Fuser\u002Fakumar\u002Fpdf\u002Fdac2014.pdf) - Anup K Das, Rishad Ahmed Shafik, Geoff V Merrett, Bashir M Al-Hashimi, Akash Kumar, Bharadwaj Veeravalli. 
DAC 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [OpenTuner: An extensible framework for program autotuning](https:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2014\u002Fansel-pact14-opentuner.pdf) - Jason Ansel, Shoaib Kamil, Kalyan Veeramachaneni, Jonathan Ragan-Kelley, Jeffrey Bosboom, Una-May O'Reilly, and Saman Amarasinghe. PACT 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Taming parallel I\u002FO complexity with auto-tuning](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.714.1995&rep=rep1&type=pdf) - Babak Behzad, Huong Vu Thanh Luu, Joseph Huchette, Surendra Byna, Ruth Aydt, Quincey Koziol, and Marc Snir. SC 2013.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [A multi-objective auto-tuning framework for parallel codes](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FPhilipp_Gschwandtner\u002Fpublication\u002F235436717_A_multi-objective_auto-tuning_framework_for_parallel_codes\u002Flinks\u002F55b5d86b08aed621de02f1d9\u002FA-multi-objective-auto-tuning-framework-for-parallel-codes.pdf) - Herbert Jordan, Peter Thoman, Juan J. Durillo, Simone Pellegrini, Philipp Gschwandtner, Thomas Fahringer, and Hans Moritsch. SC 2012.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F8-pages-green.svg\" alt=\"8-pages\" align=\"top\"> [Bandit-based optimization on graphs with application to library performance tuning](https:\u002F\u002Fwww.icml.cc\u002FConferences\u002F2009\u002Fpapers\u002F494.pdf) - Frédéric De Mesmay, Arpad Rimmel, Yevgen Voronenko, and Markus Püschel. 
ICML 2009.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Combining models and guided empirical search to optimize for multiple levels of the memory hierarchy](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.532.9511&rep=rep1&type=pdf) - Chun Chen, Jacqueline Chame, and Mary Hall. CGO 2005.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Active harmony: towards automated performance tuning](http:\u002F\u002Fwww.cs.umd.edu\u002F~hollings\u002Fpapers\u002Fsc02a.pdf) - Cristian Tapus, I-Hsin Chung, Jeffrey K. Hollingsworth. SC 2002.\n\n#### Parallelism Mapping and Task Scheduling\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Exploration of Convolutional Neural Network models for source code classification](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.engappai.2020.104075) - Francesco Barchi, Emanuele Parisi, Gianvito Urgese, Elisa Ficarra, and Andrea Acquaviva. Engineering Applications of Artificial Intelligence, January 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Autopilot: workload autoscaling at Google](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3342195.3387524) - Krzysztof Rzadca, Pawel Findeisen, Jacek Swiderski, Przemyslaw Zych, Przemyslaw Broniek, Jarek Kusmierek, Pawel Nowak, Beata Strack, Piotr Witusowski, Steven Hand, John Wilkes. EuroSys 2020. 
[slides](https:\u002F\u002Fwww.eurosys2020.org\u002Fwp-content\u002Fuploads\u002F2020\u002F04\u002Fslides\u002F149_rzadca_slides.pdf)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Modeling and optimizing NUMA effects and prefetching with machine learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3392717.3392765) - Isaac Sánchez Barrera, David Black-Schaffer, Marc Casas, Miquel Moretó, Anastasiia Stupnikova, and Mihail Popov. ICS 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Poise: Balancing thread-level parallelism and memory system performance in GPUs using machine learning](https:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fvnagaraj\u002Fpapers\u002Fhpca19.pdf) - Saumay Dublish, Vijay Nagarajan, and Nigel Topham. HPCA 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Data and thread placement in NUMA architectures: A statistical learning approach](https:\u002F\u002Fwww.mcs.anl.gov\u002Fresearch\u002Fprojects\u002Fargo\u002Fpublications\u002F2019-icpp-denoyelle.pdf) - Nicolas Denoyelle, Brice Goglin, Emmanuel Jeannot, and Thomas Ropars. ICPP 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Code Mapping in Heterogeneous Platforms Using Deep Learning and LLVM-IR](https:\u002F\u002Firis.polito.it\u002Fretrieve\u002Fhandle\u002F11583\u002F2726074\u002F327896\u002Fdocument_post_print.pdf) - Francesco Barchi, Gianvito Urgese, Enrico Macii, and Andrea Acquaviva. 
DAC 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Adaptive optimization for OpenCL programs on embedded heterogeneous systems](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F83920402.pdf) - Ben Taylor, Vicent Sanz Marco, and Zheng Wang. LCTES 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Improving spark application throughput via memory aware task co-location: A mixture of experts approach](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fmiddleware17.pdf) - Vicent Sanz Marco, Ben Taylor, Barry Porter, and Zheng Wang. Middleware 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Smart multi-task scheduling for OpenCL programs on CPU\u002FGPU heterogeneous platforms](http:\u002F\u002Fwww.lancaster.ac.uk\u002Fstaff\u002Fwangz3\u002Fpublications\u002Fhipc14.pdf) - Yuan Wen, Zheng Wang, and Michael FP O'Boyle. HiPC 2015.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Quasar: resource-efficient and QoS-aware cluster management](http:\u002F\u002Fcsl.stanford.edu\u002F~christos\u002Fpublications\u002F2014.quasar.asplos.pdf) - Christina Delimitrou, and Christos Kozyrakis. ASPLOS 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Automatic and portable mapping of data parallel programs to OpenCL for GPU-based heterogeneous systems](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fzheng_taco_2015.pdf) - Zheng Wang, Dominik Grewe, and Michael O'Boyle. 
ACM Transactions on Architecture and Code Optimization (TACO), 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Integrating Profile-Driven Parallelism Detection and Machine-Learning-Based Mapping](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Ftaco14.pdf) - Zheng Wang, Georgios Tournavitis, Björn Franke, and Michael FP O'Boyle. ACM Transactions on Architecture and Code Optimization (TACO), 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Portable Performance on Heterogeneous Architectures](https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fpbgpu-asplos13.pdf) - Phitchaya Mangpo Phothilimthana, Jason Ansel, Jonathan Ragan-Kelley, Saman Amarasinghe. ASPLOS 2013.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Smart, adaptive mapping of parallelism in the presence of external workload](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FCGO.2013.6495010) - Murali Krishna Emani, Zheng Wang, and Michael O'Boyle. CGO 2013.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Partitioning streaming parallelism for multi-cores: a machine learning based approach](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpact10.pdf) - Zheng Wang and Michael O'Boyle. PACT 2010.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Qilin: exploiting parallelism on heterogeneous multiprocessors with adaptive mapping](http:\u002F\u002Fwww.sphong.net\u002FMICRO_2009.pdf) - Chi-Keung Luk, Sunpyo Hong, and Hyesoon Kim. 
MICRO 2009.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Mapping parallelism to multi-cores: a machine learning based approach](http:\u002F\u002Fllvm.org\u002Fpubs\u002F2009-02-PPoPP-MappingParallelism.pdf) - Zheng Wang and Michael O'Boyle. PPoPP 2009. \n\n\n#### Domain-specific Optimisation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Seer: Predictive Runtime Kernel Selection for Irregular Problems](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444812) - Ryan Swann, Muhammad Osama, Karthik Sangaiah, Jalal Mahmud. CGO 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [Tensor Program Optimization with Probabilistic Programs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.13603.pdf) - Junru Shao, Xiyou Zhou, Siyuan Feng, Bohan Hou, Ruihang Lai, Hongyi Jin, Wuwei Lin, Masahiro Masuda, Cody Hao Yu, Tianqi Chen. NeurIPS 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [moTuner: a compiler-based auto-tuning approach for mixed-precision operators](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3528416.3530231) - Zewei Mo, Zejia Lin, Xianwei Zhang, Yutong Lu. CF 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Collage: Automated Integration of Deep Learning Backends](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3559009.3569651) - Byungsoo Jeon, Sunghyun Park, Peiyuan Liao, Sheng Xu, Tianqi Chen, Zhihao Jia. 
PACT 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [Learning Nonlinear Loop Invariants with Gated Continuous Logic Networks](https:\u002F\u002Fwww.cs.columbia.edu\u002F~rgu\u002Fpublications\u002Fpldi20-yao.pdf) - Jianan Yao, Gabriel Ryan, Justin Wong, Suman Jana, and Ronghui Gu. PLDI 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Learning-based Memory Allocation for C++ Server Workloads](https:\u002F\u002Fwww.cs.utexas.edu\u002Fusers\u002Fmckinley\u002Fpapers\u002Fllama-asplos-2020.pdf) - Martin Maas, David G. Andersen, Michael Isard, Mohammad Mahdi Javanmard, Kathryn S. McKinley, and Colin Raffel. ASPLOS 2020. [presentation](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=gs8m5W-xdDM&feature=emb_title)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [Bridging the gap between deep learning and sparse matrix format selection](https:\u002F\u002Fpeople.engr.ncsu.edu\u002Fxshen5\u002FPublications\u002Fppopp18.pdf) - Yue Zhao, Jiajia Li, Chunhua Liao and Xipeng Shen. PPoPP 2018. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Camel: Smart, Adaptive Energy Optimization for Mobile Web Interactions](http:\u002F\u002Feprints.whiterose.ac.uk\u002F155720\u002F1\u002Fpaper.pdf) - Jie Ren, Y. Lu, Petteri Nurmi, Xiaoming Wang, Miao Ma, Ling Gao, Zhanyong Tang, Jie Zheng, and Zheng Wang. INFOCOM 2020. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Optimizing sorting with genetic algorithms](http:\u002F\u002Fpolaris.cs.uiuc.edu\u002F~garzaran\u002Fdoc\u002Fcgo05.pdf) - Xiaoming Li, Maria Jesus Garzaran, and David Padua. CGO 2005. 
\n\n#### Languages and Compilation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [QiMeng-Xpiler: Transcompiling Tensor Programs for Deep Learning Systems with a Neural-Symbolic Approach](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi25-dong.pdf) - Shouyang Dong, Yuanbo Wen, Jun Bi, Di Huang, Jiaming Guo, Jianxing Xu, Ruibai Xu, Xinkai Song, Yifan Hao, Ling Li, Xuehai Zhou, Tianshi Chen, Qi Guo, Yunji Chen, OSDI 2025. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F74-pages-green.svg\" alt=\"74-pages\" align=\"top\"> [(De\u002FRe)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3665643) - Ari Rasch, TOPLAS 2024. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F20024748.pdf) - Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe, PLDI 2013. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [PetaBricks: a language and compiler for algorithmic choice](http:\u002F\u002Fpeople.csail.mit.edu\u002Fcychan\u002Fpapers\u002F2009pldi-petabricks.pdf) - Jason Ansel, Cy Chan, Yee Lok Wong, Marek Olszewski, Qin Zhao, Alan Edelman, and Saman Amarasinghe. 
PLDI 2009.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [Achieving High-performance the Functional Way: a Functional Pearl on Expressing High-performance Optimizations as Rewrite Strategies](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3408974) - Bastian Hagedorn, Johannes Lenfers, Thomas Kœhler, Xueying Qin, Sergei Gorlatch, and Michel Steuwer. Proceedings of the ACM on Programming Languages 2020.\n\n#### Code Size Reduction\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [Learning Compiler Pass Orders using Coreset and Normalized Value Prediction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.05104.pdf) - Youwei Liang, Kevin Stone, Ali Shameli, Chris Cummins, Mostafa Elhoushi, Jiadong Guo, Benoit Steiner, Xiaomeng Yang, Pengtao Xie, Hugh Leather, Yuandong Tian. ICML 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [POSET-RL: Phase ordering for Optimizing Size and Execution Time using Reinforcement Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9804673) - Shalini Jain, Yashas Andaluri, S. VenkataKeerthy, Ramakrishna Upadrasta. ISPASS 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Exploring the space of optimization sequences for code-size reduction: insights and tools](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3446804.3446849) - Anderson Faustino da Silva, Bernardo N. B. de Lima, and Fernando Magno Quintao Pereira. CC 2021. 
[Code and Data](https:\u002F\u002Fzenodo.org\u002Frecord\u002F4416117)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [Using machine learning to predict the code size impact of duplication heuristics in a dynamic compiler](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3475738.3480943?sid=SCITRUS) - Raphael Mosaner, David Leopoldseder, Lukas Stadler, and Hanspeter Mössenböck. MPLR 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [ANGHABENCH: a Suite with One Million Compilable C Benchmarks for Code-Size Reduction](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoCGO21.pdf) - Anderson Faustino da Silva, Bruno Conde Kind, Jose Wesley de Souza Magalhaes, Jeronimo Nunes Rocha, Breno Campos Ferreira Guimaraes, Fernando Magno Quintao Pereira. CGO 2021. [Code and Data](http:\u002F\u002Fcuda.dcc.ufmg.br\u002Fangha\u002Fhome)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [Reinforcement Learning Guided Software Debloating](http:\u002F\u002Fwww.csl.sri.com\u002Fusers\u002Fgehani\u002Fpapers\u002FMLSys-2019.DeepOCCAM.pdf) - Nham Le Van, Ashish Gehani, Arie Gurfinkel, Susmit Jha, and Jorge A. Navas. MLSys 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [Optimizing for reduced code space using genetic algorithms](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.13.1586&rep=rep1&type=pdf) - Keith D. Cooper, Philip J. Schielke, and Devika Subramanian. LCTES 1999. 
\n\n#### Cost and Performance Models\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [TLP: A Deep Learning-Based Cost Model for Tensor Program Tuning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.03578) - Yi Zhai, Yu Zhang, Shuo Liu, Xiaomeng Chu, Jie Peng, Jianmin Ji, Yanyong Zhang. ASPLOS 2023. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Performance-Detective: Automatic Deduction of Cheap and Accurate Performance Models](https:\u002F\u002Fmcopik.github.io\u002Fassets\u002Fpdf\u002F2022_ics_schmid_perf_detective.pdf) - Larissa Schmid, Marcin Copik, Alexandru Calotoiu, Dominik Werle, Andreas Reiter, Michael Selzer, Anne Koziolek, Torsten Hoefler. ICS 2022. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Neural Network-based Performance Prediction for Task Migration on S-NUCA Many-Cores](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9190026) - Martin Rapp, Anuj Pathania, Tulika Mitra, Jörg Henkel. IEEE Transactions on Computers, 2021. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [A Deep Learning Based Cost Model for Automatic Code Optimization](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Ffile\u002F3def184ad8f4755ff269862ea77393dd-Paper.pdf) - Riyadh Baghdadi, Massinissa Merouani, Mohamed-Hicham Leghettas, Kamel Abdous, Taha Arbaoui, Karima Benatchba, Saman Amarasinghe. MLSys 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Comparative Code Structure Analysis using Deep Learning for Performance Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07660) - Nathan Pinnow, Tarek Ramadan, Tanzima Z. Islam, Chase Phelps, Jayaraman J. 
Thiagarajan. ISPASS 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [Extracting Clean Performance Models from Tainted Programs](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9139798) - Marcin Copik, Alexandru Calotoiu, Tobias Grosser, Nicolas Wicki, Felix Wolf, Torsten Hoefler. PPoPP 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [PMEvo: Portable Inference of Port Mappings for Out-of-Order Processors by Evolutionary Optimization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10044.pdf) - Fabian Ritter, Sebastian Hack. PLDI 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [An Active Learning Method for Empirical Modeling in Performance Tuning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9139798) - Jiepeng Zhang, Jingwei Sun, Wenju Zhou, Guangzhong Sun. IPDPS 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Learning to Optimize Halide with Tree Search and Random Programs](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3306346.3322967) - Andrew Adams, Karima Ma, Luke Anderson, Riyadh Baghdadi, Tzu-Mao Li, Michael Gharbi, Benoit Steiner, Steven Johnson, Kayvon Fatahalian, Frédo Durand, Jonathan Ragan-Kelley. ACM Transactions on Graphics, 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Ithemal: Accurate, portable and fast basic block throughput estimation using deep neural networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fmendis19a\u002Fmendis19a.pdf) - Charith Mendis, Alex Renda, Saman Amarasinghe, and Michael Carbin. 
ICML 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Absinthe: Learning an Analytical Performance Model to Fuse and Tile Stencil Codes in One Shot](http:\u002F\u002Funixer.de\u002Fpublications\u002Fimg\u002Fgysi-absinthe.pdf) - Tobias Gysi, Tobias Grosser, and Torsten Hoefler. PACT 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F22-pages-green.svg\" alt=\"22-pages\" align=\"top\"> [Predicting new workload or CPU performance by analyzing public datasets](https:\u002F\u002Fyuemmawang.github.io\u002Fpublications\u002Fwang-taco2019.pdf) - Yu Wang, Victor Lee, Gu-Yeon Wei, and David Brooks. ACM Transactions on Architecture and Code Optimization (TACO), 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Automatic creation of tile size selection models](http:\u002F\u002Fpeople.rennes.inria.fr\u002FTomofumi.Yuki\u002Fpapers\u002Fyuki-cgo2010.pdf) - Tomofumi Yuki, Lakshminarayanan Renganarayanan, Sanjay Rajopadhye, Charles Anderson, Alexandre E. Eichenberger, and Kevin O'Brien. CGO 2010. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Microarchitecture sensitive empirical models for compiler optimizations](https:\u002F\u002Fwww.csa.iisc.ac.in\u002F~srikant\u002Fpapers-theses\u002Fkapil-CGO-2007.pdf) - Kapil Vaswani, Matthew J. Thazhuthaveetil, Y. N. Srikant, and P. J. Joseph. CGO 2007. \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Accurate static estimators for program optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F178243.178251) - Tim A. Wagner, Vance Maverick, Susan L. Graham, and Michael A. Harrison. PLDI 1994. 
\n\n#### Learning Program Representation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Performance Embeddings: A Similarity-Based Transfer Tuning Approach to Performance Optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3533767.3534383) - Lukas Trümper, Tal Ben-Nun, Philipp Schaad, Alexandru Calotoiu, Torsten Hoefler. ICS 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Improving cross-platform binary analysis using representation learning via graph alignment](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3533767.3534383) - Geunwoo Kim, Sanghyun Hong, Michael Franz, Dokyung Song. ISSTA 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F46-pages-green.svg\" alt=\"46-pages\" align=\"top\"> [Program Representations for Predictive Compilation: State of Affairs in the Early 20's](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoJCL22.pdf) - Anderson Faustino da Silva, Edson Borin, Fernando Magno Quintao Pereira, Nilton Luiz Queiroz Junior and Otavio Oliveira Napoli. JCL 2022. [Code and Data](https:\u002F\u002Fgithub.com\u002Fotavioon\u002FCOLA-2022-Tools)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Comparative Code Structure Analysis using Deep Learning for Performance Prediction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07660.pdf) - Nathan Pinnow, Tarek Ramadan, Tanzima Z. Islam, Chase Phelps, Jayaraman J. Thiagarajan. 
ISPASS 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [GraphCodeBERT: Pre-training Code Representations with Data Flow](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.08366.pdf) - Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou. ICLR 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.findings-emnlp.139.pdf) - Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou. EMNLP 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [IR2VEC: LLVM IR Based Scalable Program Embeddings](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3418463) - S. VenkataKeerthy, Rohit Aggarwal, Shalini Jain, Maunendra Sankar Desarkar, Ramakrishna Upadrasta and Y. N. Srikant. TACO 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Deep Program Structure Modeling Through Multi-Relational Graph-based Learning](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpact20.pdf) - Guixin Ye, Zhanyong Tang, Huanting Wang, Jianbin Fang, Songfang Huang and Zheng Wang. PACT 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Global Relational Models of Source Code](https:\u002F\u002Fopenreview.net\u002Fpdf?id=B1lnbRNtwr) - Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, David Bieber, ICLR 2020. 
([Data and Code](https:\u002F\u002Fgithub.com\u002FVHellendoorn\u002FICLR20-Great))\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F45-pages-green.svg\" alt=\"45-pages\" align=\"top\"> [Learning Semantic Program Embeddings with Graph Interval Neural Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.09997.pdf) - Yu Wang, Ke Wang, Fengjuan Gao, and Linzhang Wang. OOPSLA 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [Flow2Vec: Value-Flow-Based Precise Code Embedding](https:\u002F\u002Fyuleisui.github.io\u002Fpublications\u002Foopsla20.pdf) - Yulei Sui, Xiao Cheng, Guanqin Zhang and Haoyu Wang. OOPSLA 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [MISIM: An End-to-End Neural Code Similarity System](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.05265.pdf) - Fangke Ye, Shengtian Zhou, Anand Venkat, Ryan Marcus, Nesime Tatbul, Jesmin Jahan Tithi, Paul Petersen, Timothy Mattson, Tim Kraska, Pradeep Dubey, Vivek Sarkar and Justin Gottschlich. arXiv 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Blended, precise semantic program embeddings](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3385412.3385999) - Ke Wang and Zhendong Su. PLDI 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [LambdaNet: Probabilistic Type Inference using Graph Neural Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.02161.pdf) - Jiayi Wei, Maruth Goyal, Greg Durrett, and Isil Dillig. 
ICLR 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Compiler-based graph representations for deep learning models of code](https:\u002F\u002Fcfaed.tu-dresden.de\u002Ffiles\u002FImages\u002Fpeople\u002Fchair-cc\u002Fpublications\u002F2002_Brauckmann_CC.pdf) - Alexander Brauckmann, Andrés Goens, Sebastian Ertel, and Jeronimo Castrillon. CC 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F21-pages-green.svg\" alt=\"21-pages\" align=\"top\"> [Generative Code Modeling with Graphs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08490.pdf) - Marc Brockschmidt, Miltos Allamanis, Alexander L. Gaunt, and Oleksandr Polozov. ICLR 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F22-pages-green.svg\" alt=\"22-pages\" align=\"top\"> [code2seq: Generating sequences from structured representations of code](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.01400) - Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. ICLR 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [code2vec: Learning distributed representations of code](http:\u002F\u002Fwww.cs.technion.ac.il\u002F~mbs\u002Fpublications\u002Fcode2vec-popl19.pdf) - Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. POPL 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [COSET: A Benchmark for Evaluating Neural Program Embeddings](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.11445.pdf) - Ke Wang, Mihai Christodorescu. 
arXiv 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Learning to Represent Programs with Graphs](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2017\u002F11\u002FprogramGraphs.pdf) - Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. ICLR 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Neural Code Comprehension: A Learnable Representation of Code Semantics](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7617-neural-code-comprehension-a-learnable-representation-of-code-semantics.pdf) - Tal Ben-Nun, Alice Shoshana Jakobovits, and Torsten Hoefler. NeurIPS 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [End-to-end deep learning of optimization heuristics](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2017-deepopt-pact.pdf) - Chris Cummins, Pavlos Petoumenos, Zheng Wang, and Hugh Leather ([slides](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fend-to-end-deep-learning-of-optimization-heuristics-pact-17)). PACT 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Semantic-aware program sampling](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2017\u002F11\u002Fnips_2017.pdf) - Pratiksha Thaker, Daniel Tarlow, and Marc Brockschmidt. NeurIPS 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F20-pages-green.svg\" alt=\"20-pages\" align=\"top\"> [DeepCoder: Learning to write programs](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fuploads\u002Fprod\u002F2017\u002F03\u002Fmain.pdf) - Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. 
ICLR 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [Convolutional neural networks over tree structures for programming language processing](http:\u002F\u002Fsei.pku.edu.cn\u002F~zhanglu\u002FDownload\u002FAAAI16.pdf) - Lili Mou, Ge Li, Lu Zhang, Tao Wang, and Zhi Jin. AAAI 2016.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [A Convolutional Attention Network for Extreme Summarization of Source Code](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fallamanis16.pdf) - Miltos Allamanis, Hao Peng, and Charles Sutton. ICML 2016.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [Structured Generative Models of Natural Source Code](http:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fmaddison14.pdf) - Chris Maddison and Daniel Tarlow. ICML 2014.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Using graph-based program characterization for predictive modeling](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fcgo-2012.pdf) - Eunjung Park, John Cavazos, and Marco A. Alvarez. CGO 2012.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Automatic feature generation for machine learning based optimizing compilation](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2009_autofeatures_cgo.pdf) - Hugh Leather, Edwin Bonilla, and Michael O'Boyle. 
CGO 2009.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [A Game-Based Framework to Compare Program Classifiers and Evaders](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FCGO23_ThaisDamasio.pdf) - Thais Damasio, Michael Canesche, Vinicius Pacheco, Anderson Faustino da Silva, Marcus Botacin and Fernando Magno Quintao Pereira. CGO 2023. [Code and Data](https:\u002F\u002Fzenodo.org\u002Frecord\u002F7374649)\n   \n#### ML for Compilers and Systems Optimisation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [DFA-Net: A Compiler-Specific Neural Architecture for Robust Generalization in Data Flow Analyses](https:\u002F\u002Fcfaed.tu-dresden.de\u002Ffiles\u002FImages\u002Fpeople\u002Fchair-cc\u002Fpublications\u002F2503_Brauckmann_CC.pdf) - Alexander Brauckmann, Anderson Faustino da Silva, Gabriel Synnaeve, Michael FP O’Boyle, Jeronimo Castrillon, Hugh Leather. CC 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Reductive Analysis with Compiler-Guided Large Language Models for Input-Centric Code Optimizations](https:\u002F\u002Fresearch.csc.ncsu.edu\u002Fpicture\u002Fpublications\u002Fpapers\u002Fpldi2025) - Xiangwei Wang, Xinning Hui, Chunhua Liao, Xipeng Shen. PLDI 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Enhancing Deployment-Time Predictive Model Robustness for Code Analysis and Optimization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.00298) - Huanting Wang, Patrick Lenihan, Zheng Wang. CGO 2025. 
([Code](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FPROM\u002F))\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F19-pages-green.svg\" alt=\"19-pages\" align=\"top\"> [The MLIR Transform Dialect - Your compiler is more powerful than you think](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.03864) - Martin Paul Lücke, Oleksandr Zinenko, William S. Moses, Michel Steuwer, Albert Cohen. arXiv 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F33-pages-green.svg\" alt=\"33-pages\" align=\"top\"> [Meta Large Language Model Compiler: Foundation Models of Compiler Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.02524) - Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, Hugh Leather. arXiv 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [The Next 700 ML-Enabled Compiler Optimizations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.10800.pdf) - S. VenkataKeerthy, Siddharth Jain, Umesh Kalvakuntla, Pranav Sai Gorantla, Rajiv S Chitale, Eugene Brevdo, Albert Cohen, Mircea Trofin, Ramakrishna Upadrasta. CC 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [BenchPress: A Deep Active Benchmark Generator](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.06555.pdf) - Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather. PACT 2022 ([code](https:\u002F\u002Fgithub.com\u002Ffivosts\u002FBenchPress)).\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Automating Reinforcement Learning Architecture Design for Code Optimization](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fcc22.pdf) - Huanting Wang, Zhanyong Tang, Cheng Zhang, Jiaqi Zhao, Chris Cummins, Hugh Leather, Zheng Wang. 
CC 2022 ([code](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FSUPERSONIC)).\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Learning Semantic Representations to Verify Hardware Designs](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc5aa65949d20f6b20e1a922c13d974e7-Paper.pdf) - Shobha Vasudevan, Wenjie (Joe) Jiang, David Bieber, Rishabh Singh, Hamid Shojaei, C. Richard Ho, Charles Sutton. NeurIPS 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [Composable and Modular Code Generation in MLIR: A Structured and Retargetable Approach to Tensor Compiler Construction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.03293.pdf) - Nicolas Vasilache, Oleksandr Zinenko, Aart J.C. Bik, Mahesh Ravishankar, Thomas Raoux, Alexander Belyaev, Matthias Springer, Tobias Gysi, Diego Caballero, Stephan Herhut, Stella Laurenzo, Albert Cohen. arXiv 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Deep NLP-based co-evolvement for synthesizing code analysis from natural language](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3446804.3446852) - Zifan Nan, Hui Guan, Xipeng Shen, Chunhua Liao. CC 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [MLGO: a Machine Learning Guided Compiler Optimizations Framework](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04808) - Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li. arXiv 2021. 
[Code](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fml-compiler-opt)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Towards Better Understanding of Black-box Auto-tuning: A Comparative Analysis for Storage Systems](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fatc18\u002Fatc18-cao.pdf) - Zhen Cao, Vasily Tarasov, Sachin Tiwari, and Erez Zadok. ATC 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Synthesizing Benchmarks for Predictive Modeling](https:\u002F\u002Fwww.pure.ed.ac.uk\u002Fws\u002Ffiles\u002F29479104\u002F2017_cgo_1.pdf) - Chris Cummins, Pavlos Petoumenos, Zheng Wang, and Hugh Leather ([slides](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fsynthesizing-benchmarks-for-predictive-modelling-cgo-17)). CGO 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Minimizing the cost of iterative compilation with active learning](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2017-minimitercomp-cgo.pdf) - William Ogilvie, Pavlos Petoumenos, Zheng Wang, and Hugh Leather. CGO 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F28-pages-green.svg\" alt=\"28-pages\" align=\"top\"> [VESPA: static profiling for binary optimization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3485521) - Angelica Aparecida Moreira, Guilherme Ottoni, and Fernando Magno Quintao Pereira. OOPSLA 2021. 
[Code and Data](https:\u002F\u002Fzenodo.org\u002Frecord\u002F5502310)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F35-pages-green.svg\" alt=\"35-pages\" align=\"top\"> [Mapping Computations in Heterogeneous Multicore Systems with Statistical Regression on Program Inputs](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FJunioTECS21.pdf) - Junio Cezar Ribeiro Da Silva, Lorena Leao, Vinicius Petrucci, Abdoulaye Gamatie and Fernando Magno Quintao Pereira. TECS 2021.\n\n#### Memory\u002FCache Modeling\u002FAnalysis\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Optimizing Memory Mapping Using Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.07440.pdf) - Pengming Wang, Mikita Sazanovich, Berkin Ilbeyi, Phitchaya Mangpo Phothilimthana, Manish Purohit, Han Yang Tay, Ngân Vũ, Miaosen Wang, Cosmin Paduraru, Edouard Leurent, Anton Zhernov, Julian Schrittwieser, Thomas Hubert, Robert Tung, Paula Kurylowicz, Kieran Milan, Oriol Vinyals, Daniel J. Mankowitz. arXiv 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Learning Memory Access Patterns](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhashemi18a\u002Fhashemi18a.pdf) - Milad Hashemi, Kevin Swersky, Jamie A. Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis, Parthasarathy Ranganathan. ICML 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Static Prediction of Silent Stores](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3280848) - Fernando Magno Quintao Pereira, Guilherme Vieira Leobas and Abdoulaye Gamatie. TACO 2019. 
[Code and Data](https:\u002F\u002Fwww.lirmm.fr\u002Fcontinuum-project\u002Fpages\u002Fs3a.html)\n\n## Books\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F118-pages-green.svg\" alt=\"118-pages\" align=\"top\"> [Automatic Tuning of Compilers Using Machine Learning](https:\u002F\u002Flink.springer.com\u002Fbook\u002F10.1007\u002F978-3-319-71489-9) - Amir H. Ashouri, Gianluca Palermo, John Cavazos, and Cristina Silvano. Springer 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F377-pages-green.svg\" alt=\"377-pages\" align=\"top\"> [Software Automatic Tuning - From Concepts to State-of-the-Art Results](https:\u002F\u002Fwww.springer.com\u002Fgp\u002Fbook\u002F9781441969347) - K Naono, K Teranishi, J Cavazos, and R Suda. Springer 2010.\n\n## Talks and Tutorials\n- Tianqi Chen et al., [MLC: Machine Learning Compiler](https:\u002F\u002Fmlc.ai\u002Findex.html) ([GitHub](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fmlc-en)). OctoML 2022.\n- Saman Amarasinghe, [Compiler 2.0: Using Machine Learning to Modernize Compiler Technology](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=a1w_NKDVdkI). LCTES 2020.\n- Amir Ashouri, [Compiler Autotuning using Machine Learning: A State-of-the-art Review](https:\u002F\u002Fyoutu.be\u002FxNixKfDxDZE) ([slides](http:\u002F\u002Famirashouri.ca\u002Fresources\u002FAmir_CompileAutotuning_Talk_2019_Google.pdf)). 
Polytechnic University of Milan 2018.\n\n## Software\n- [PROM](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FPROM\u002F) - A Python toolkit for identifying ML model mispredictions after deployment ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.00298)).\n- [ML-Compiler-Bridge](https:\u002F\u002Fgithub.com\u002FIITH-Compilers\u002FML-Compiler-Bridge) - Library to interface compilers and ML models for ML-enabled compiler optimizations ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.10800.pdf)).\n- [Supersonic](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FSUPERSONIC) - Automates reinforcement learning architecture design ([paper](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fcc22.pdf)).\n- [CompilerGym](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FCompilerGym) - Reinforcement learning environments for compiler optimizations ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.08267.pdf)).\n- [CodeBERT](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCodeBERT) - Pre-trained DNN models for programming languages ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.08155.pdf)).\n- [IR2Vec](https:\u002F\u002Fgithub.com\u002FIITH-Compilers\u002FIR2Vec) - LLVM IR based program embeddings for machine learning ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.06228.pdf)).\n- [ProGraML](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002FProGraML) - LLVM and XLA IR program representation for machine learning ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10536.pdf)).\n- [NeuroVectorizer](https:\u002F\u002Fgithub.com\u002Fintel\u002Fneuro-vectorizer) - Using deep reinforcement learning (RL) to predict optimal vectorization compiler pragmas ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.13639)).\n- [TVM](https:\u002F\u002Ftvm.apache.org\u002F) - Open Deep Learning Compiler Stack for CPU, GPU and specialized accelerators 
([paper](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi18-chen.pdf); [slides](https:\u002F\u002Fwww.usenix.org\u002Fsites\u002Fdefault\u002Ffiles\u002Fconference\u002Fprotected-files\u002Fosdi18_slides_chen.pdf)).\n- [MLC-LLM](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fmlc-llm) - A machine learning compiler and high-performance deployment engine for large language models (Reference techniques: [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.04296), [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.13603) and [TVM](https:\u002F\u002Ftvm.apache.org\u002F)).\n- [clgen](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002Fclgen) - Benchmark generator using LSTMs ([paper](https:\u002F\u002Fchriscummins.cc\u002Fpub\u002F2017-cgo.pdf); [slides](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fsynthesizing-benchmarks-for-predictive-modelling-cgo-17)).\n- [COBAYN](https:\u002F\u002Fgithub.com\u002Famirjamez\u002FCOBAYN) - Compiler autotuning using Bayesian Networks ([paper](http:\u002F\u002Famirashouri.ca\u002Fresources\u002FCOBAYN-ashouri_taco16.pdf)).\n- [OpenTuner](https:\u002F\u002Fgithub.com\u002Fjansel\u002Fopentuner) - Framework for building domain-specific multi-objective program autotuners ([paper](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2014\u002Fansel-pact14-opentuner.pdf); [slides](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2014\u002Fansel-pact14-opentuner-slides.pdf)).\n- [ONNX-MLIR](http:\u002F\u002Fonnx.ai\u002Fonnx-mlir\u002F) - Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.08272.pdf)).\n- [IREE](https:\u002F\u002Fgithub.com\u002Fopenxla\u002Firee) - A retargetable MLIR-based machine learning compiler and runtime toolkit. 
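Several of the autotuners listed above (OpenTuner, COBAYN) automate the same core loop: sample a configuration of compiler options, measure the resulting program, and keep the best configuration seen so far. A minimal stdlib-only Python sketch of that loop using random search; the option names and the synthetic `measure` cost model below are illustrative stand-ins, not any real compiler's flags or timings:

```python
import random

# Hypothetical option space: each option takes one of a few settings.
# These names and values are made up for illustration only.
FLAG_SPACE = {
    "unroll": [1, 2, 4, 8],
    "inline_threshold": [50, 100, 200],
    "vectorize": [False, True],
}

def measure(config):
    """Stand-in for 'compile with these options and time the binary'.
    A real autotuner would invoke the compiler and run the program;
    here a deterministic synthetic cost keeps the sketch self-contained."""
    cost = 100.0
    cost -= 5 * (config["unroll"] == 4)            # pretend unroll=4 helps
    cost -= 3 * (config["inline_threshold"] == 100)
    cost -= 10 * config["vectorize"]
    return cost

def random_search(trials=100, seed=0):
    """Sample configurations at random, keep the cheapest one seen."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {name: rng.choice(values) for name, values in FLAG_SPACE.items()}
        cost = measure(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

if __name__ == "__main__":
    cfg, cost = random_search()
    print(cfg, cost)
```

Real frameworks replace `measure` with an actual compile-and-run step and the random sampler with smarter strategies (Bayesian optimisation, genetic search, multi-armed bandits), but the interface — a configuration space plus an objective function — is the same.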
\n\n## Benchmarks and Datasets\n- [TenSet: A Large-scale Program Performance Dataset for Learned Tensor Compilers](https:\u002F\u002Fgithub.com\u002Ftlc-pack\u002Ftenset) - A dataset of tensor program performance records for six commonly used hardware platforms ([paper](https:\u002F\u002Fopenreview.net\u002Fpdf?id=aIfp8kLuvc9)).\n- [The Alberta Workloads for the SPEC CPU® 2017 Benchmark Suite](https:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~amaral\u002FAlbertaWorkloadsForSPECCPU2017\u002F) - Additional workloads for the SPEC CPU2017 Benchmark Suite.\n- [Project CodeNet](https:\u002F\u002Fgithub.com\u002FIBM\u002FProject_CodeNet) - Code samples written in 50+ programming languages, annotated with metadata such as code size, memory footprint, CPU run time, and status (acceptance\u002Ferror types).\n- [CodeXGLUE](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCodeXGLUE) - A Machine Learning Benchmark Dataset for Code Understanding and Generation ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.04664.pdf)).\n- [ANGHABENCH](http:\u002F\u002Fcuda.dcc.ufmg.br\u002Fangha\u002Fbenchmarks) - A suite with one million compilable C benchmarks ([paper](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoCGO21.pdf)).\n- [BHive](https:\u002F\u002Fgithub.com\u002Fithemal\u002Fbhive) - A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models ([paper](https:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F19\u002Fithemal-measurement.pdf)).\n- [cBench](https:\u002F\u002Fctuning.org\u002Fwiki\u002Findex.php\u002FCTools:CBench) - 32 C benchmarks with datasets and driver scripts.\n- [PolyBench](http:\u002F\u002Fweb.cs.ucla.edu\u002F~pouchet\u002Fsoftware\u002Fpolybench\u002F) - 30 Stencil and Linear-algebra benchmarks with datasets and driver scripts. 
See also: [GPU version](https:\u002F\u002Fgithub.com\u002Fcavazos-lab\u002FPolyBench-ACC), [pre-computed datasets](https:\u002F\u002Fgithub.com\u002Fstefanocereda\u002Fpolybench_data) ([paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3372799.3394361)).\n- [DeepDataFlow](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002FProGraML\u002Fblob\u002Fmaster\u002Fprograml\u002FDocumentation\u002FDataflowDataset.md) - 469k LLVM-IR files and 8.6B data-flow analysis labels for classification ([paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10536.pdf)).\n- [devmap](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002Fpaper-end2end-dl) - 650 OpenCL benchmark features and CPU\u002FGPU classification labels ([paper](https:\u002F\u002Fchriscummins.cc\u002Fpub\u002F2017-pact.pdf); [slides](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fend-to-end-deep-learning-of-optimization-heuristics-pact-17)).\n\n## Conferences\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI](https:\u002F\u002Fwww.sigplan.org\u002FConferences\u002FPLDI\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [Architectural Support for Programming Languages and Operating Systems, ASPLOS](https:\u002F\u002Fasplos-conference.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fppopp)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIEEE-blue.svg\" alt=\"ACM\u002FIEEE\" align=\"top\"> [International Symposium on Code Generation and Optimization, CGO](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fcgo)\n- 
\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIEEE-blue.svg\" alt=\"ACM\u002FIEEE\" align=\"top\"> [International Conference on Parallel Architectures and Compilation Techniques, PACT](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fpact)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [Object-oriented Programming, Systems, Languages, and Applications, OOPSLA](http:\u002F\u002Fwww.sigplan.org\u002FConferences\u002FOOPSLA\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on Compiler Construction, CC](https:\u002F\u002Fconf.researchr.org\u002Fseries\u002FCC)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on Supercomputing, ICS](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fics\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on High Performance and Embedded Architectures and Compilers, HiPEAC](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fhipeac\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on Languages, Compilers and Tools for Embedded Systems, LCTES](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Flctrts\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on Computing Frontiers, CF](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fcf)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [International Parallel and Distributed 
Processing Symposium, IPDPS](http:\u002F\u002Fwww.ipdps.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [International Conference for High Performance Computing, Networking, Storage, and Analysis, SC](http:\u002F\u002Fsupercomputing.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWorkshop-Academic-blue.svg\" alt=\"Workshop\" align=\"top\"> [Machine Learning and Programming Languages Workshop, MAPL](https:\u002F\u002Fpldi20.sigplan.org\u002Fseries\u002Fmapl)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWorkshop-Academic-blue.svg\" alt=\"Workshop\" align=\"top\"> [Languages and Compilers for Parallel Computing, LCPC](https:\u002F\u002Fdblp.org\u002Fdb\u002Fconf\u002Flcpc\u002Findex)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"Academic\" align=\"top\"> [International Conference on Learning Representations, ICLR](https:\u002F\u002Fdblp1.uni-trier.de\u002Fdb\u002Fconf\u002Ficlr\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"Academic\" align=\"top\"> [Conference on Machine Learning and Systems, MLSys](https:\u002F\u002Fmlsys.org\u002F)\n\u003C!---- - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [IEEE\u002FACM International Symposium on Microarchitecture, Micro](https:\u002F\u002Fdblp1.uni-trier.de\u002Fdb\u002Fconf\u002Fmicro\u002F)\n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [International Conference on Compilers, Architectures, and Synthesis for Embedded Systems, CASES](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fcases\u002Findex.html) \n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-USENIX-blue.svg\" 
alt=\"USENIX\" align=\"top\"> [USENIX Annul Technical Conference, ATC](https:\u002F\u002Fwww.usenix.org\u002Fconferences\u002Fbyname\u002F131) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-USENIX-blue.svg\" alt=\"USENIX\" align=\"top\"> [USENIX Symposium on Operating Systems Design and Implementation, OSDI](https:\u002F\u002Fdblp.org\u002Fdb\u002Fconf\u002Fosdi\u002Findex) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [International Conference on High Performance Computing, Data and Analytics, HiPC](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fhipc\u002Findex.html) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [International Conference on Virtual Execution Environments, VEE](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fvee\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [European Conference on Computer Systems, EuroSys](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Feurosys\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM Symposium on Parallelism in Algorithms and Architectures, SPAA](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fspaa\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IACC-blue.svg\" alt=\"IACC\" align=\"top\"> [International Conference on Parallel Processing, ICPP](http:\u002F\u002Fwww.wikicfp.com\u002Fcfp\u002Fprogram?id=1447&f=International%20Conference%20on%20Parallel%20Processing) \n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIFIP\u002FUSENIX-blue.svg\" alt=\"ACM\u002FIFIP\u002FUSENIX\" align=\"top\"> [International Middleware Conference, 
Middleware](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fmiddleware\u002F) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"ACM\" align=\"top\"> [European Conference on Parallel Processing, Euro-Par](http:\u002F\u002Fwww.wikicfp.com\u002Fcfp\u002Fprogram?id=967&f=European) --->\n\n## Journals\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FJournal-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM Transactions on Architecture and Code Optimization, TACO](https:\u002F\u002Fdl.acm.org\u002Fjournal\u002Ftaco)\n\n## How to Contribute\n\nSee [Contribution Guidelines](CONTRIBUTING.md). TL;DR: send one of the [maintainers](MAINTAINERS) a [pull request](https:\u002F\u002Fgithub.com\u002Fzwang4\u002Fawesome-machine-learning-in-compilers\u002Fpulls).\n","# Awesome machine learning for compilers and program optimisation\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![Maintained](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002Fzwang4\u002Fawesome-machine-learning-in-compilers\u002Fgraphs\u002Fcommit-activity)\n\nA curated list of awesome research papers, datasets, and tools for applying machine learning techniques to compilers and program optimisation.\n\n\n## Contents\n- [Papers](#papers)\n   - [Survey](#survey)\n   - [Iterative Compilation and Compiler Option Tuning](#iterative-compilation-and-compiler-option-tuning)\n   - [Instruction-level Optimisation](#instruction-level-optimisation)\n   - [Parallelism Mapping and Task Scheduling](#parallelism-mapping-and-task-scheduling)\n   - [Languages and Compilation](#languages-and-compilation)\n   - [Auto-tuning and Design Space Exploration](#auto-tuning-and-design-space-exploration)\n   - [Code Size Reduction](#code-size-reduction)\n   - [Cost and Performance Models](#cost-and-performance-models)\n   - [Domain-specific Optimisation](#domain-specific-optimisation)\n   - [Learning Program Representation](#learning-program-representation)\n   - [ML for Compilers and Systems Optimisation](#ml-for-compilers-and-systems-optimisation)\n   - [Memory\u002FCache Modeling\u002FAnalysis](#memorycache-modelinganalysis)\n- [Books](#books)\n- 
[Talks and Tutorials](#talks-and-tutorials)\n- [Software](#software)\n- [Benchmarks and Datasets](#benchmarks-and-datasets)\n- [Conferences](#conferences)\n- [Journals](#journals)\n- [How to Contribute](#how-to-contribute)\n\n## Papers\n#### Survey\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [Machine learning in compiler optimisation](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpieee18.pdf) - Zheng Wang and Michael O'Boyle, Proceedings of the IEEE, 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [A survey on compiler autotuning using machine learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3197978) - Amir H. Ashouri, William Killian, John Cavazos, Gianluca Palermo and Cristina Silvano, ACM Computing Surveys (CSUR), 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [A survey of machine learning for big code and naturalness](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.06182) - Miltiadis Allamanis, Earl T. Barr, Premkumar Devanbu and Charles Sutton, ACM Computing Surveys (CSUR), 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [A taxonomy of ML for systems problems](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?tp=&arnumber=9153088) - Martin Maas, IEEE Micro, 2020\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F34-pages-green.svg\" alt=\"34-pages\" align=\"top\"> [The deep learning compiler: a comprehensive survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03794) - Mingzhen Li, Yi Liu, Xiaoyan Liu, Qingxiao Sun, Xin You, Hailong Yang, Zhongzhi Luan, Lin Gan, Guangwen Yang, Depei Qian, IEEE Transactions on Parallel and Distributed Systems, 2021\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F33-pages-green.svg\" alt=\"33-pages\" align=\"top\"> [The new compiler stack: a survey of the synergy between LLMs and compilers](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs42514-025-00270-x) - Shuoming Zhang, Jiacheng Zhao, Qiuchu Yu, Chunwei Xia, Zheng Wang, Xiaobing Feng, Huimin Cui, CCF Transactions on High Performance Computing, 2026\n\n#### Iterative Compilation and Compiler Option Tuning\n- \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [迈向高效的编译器自动调优：利用协同搜索空间](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696443.3708961) - 潘浩林、魏元宇、邢明杰、吴延军、赵晨。CGO 2025。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [SRTuner：通过揭示协同关系实现有效的编译优化定制](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9741263) - 朴成贤、萨拉尔·拉蒂菲、朴勇俊、阿曼德·贝赫鲁齐、全炳洙、斯科特·马尔克。CGO 2022。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [基于度量学习和协同过滤的迭代编译优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Ffull\u002F10.1145\u002F3480250) - 刘洪志、罗杰、李颖、吴忠海。ACM TACO 2022。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [贝叶斯优化优于随机搜索用于机器学习超参数调优：对2020年黑盒优化挑战赛的分析](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.10201.pdf) - 瑞安·特纳、大卫·埃里克森、迈克尔·麦考特、尤哈·基利、埃罗·拉克索宁、许振、伊莎贝尔·盖永。arXiv 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Bliss：利用多样化轻量级学习模型池对复杂应用进行自动调优](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3453483.3454109) - RB Roy、T Patel、V Gadepally、D Tiwari。PLDI 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [通过贝叶斯优化实现高效的编译器自动调优](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1uc5d6xn3EUYXWVV8VFSdtfZ9eqvTL3k1\u002Fview) - 陈俊杰、徐宁欣、陈沛琪、张宏宇。ICSE 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [针对LLVM\u002FPolly可组合循环优化变换的定制蒙特卡洛树搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.04555) - 郜在勋、普拉桑纳·巴拉普拉卡什、迈克尔·克鲁泽、吴兴富、保罗·霍夫兰、玛丽·霍尔。Arxiv.org，2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> 
[Improved basic block reordering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.04676.pdf) - Andy Newell and Sergey Pupyrev. IEEE Transactions on Computers, 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Static Neural Compiler Optimization via Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.08951) - Rahim Mammadli, Ali Jannesari, Felix Wolf. LLVM HPC Workshop, 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Autotuning search space for loop transformations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.04676.pdf) - Michael Kruse, Hal Finkel, Xingfu Wu. LLVM HPC Workshop, 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [A collaborative filtering approach for the automatic tuning of compiler optimisations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3372799.3394361) - Stefano Cereda, Gianluca Palermo, Paolo Cremonesi and Stefano Doni. LCTES 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Autophase: Compiler phase-ordering for high level synthesis with deep reinforcement learning](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2020\u002Ffile\u002F4e732ced3463d06de0ca9a15b6153677-Paper.pdf). Ameer Haj-Ali, Qijing Huang, William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek. MLSys 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [FuncyTuner: Auto-tuning scientific applications with per-loop compilation](https:\u002F\u002Farcb.csc.ncsu.edu\u002F~mueller\u002Fftp\u002Fpub\u002Fmueller\u002Fpapers\u002Ficpp19.pdf) - Tao Wang, Nikhil Jain, David Beckingsale, David Boehme, Frank Mueller, Todd Gamblin. ICPP 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F21-pages-green.svg\" alt=\"21-pages\" align=\"top\"> [Micomp: Mitigating the compiler phase-ordering problem using optimization sub-sequences and machine learning](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F93751619.pdf) - Amir H. Ashouri, Andrea Bignoli, Gianluca Palermo, Cristina Silvano, Sameer Kulkarni and John Cavazos. ACM Transactions on Architecture and Code Optimization (TACO) 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> 
[Iterative schedule optimization for parallelization in the polyhedron model](https:\u002F\u002Fwww.infosun.fim.uni-passau.de\u002Fpublications\u002Fdocs\u002FGGS+17.pdf) - Stefan Ganser, Armin Größlinger, Norbert Siegmund, Sven Apel and Christian Lengauer. ACM Transactions on Architecture and Code Optimization (TACO), 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Learning to superoptimize programs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01787v3) - Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H.S. Torr, Pushmeet Kohli. ICLR 2017.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Continuous learning of compiler heuristics](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F2400682.2400705) - Michele Tartara and Stefano Crespi Reghizzi. ACM Transactions on Architecture and Code Optimization (TACO), 2013.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Mitigating the compiler optimization phase-ordering problem using machine learning](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Foopsla-2012.pdf) - Sameer Kulkarni and John Cavazos. OOPSLA 2012.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [An evaluation of different modeling techniques for iterative compilation](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fcases-2011.pdf) - Eunjung Park, Sameer Kulkarni and John Cavazos. CASES 2011.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Evaluating iterative optimization across 1000 datasets](https:\u002F\u002Fusers.elis.ugent.be\u002F~leeckhou\u002Fpapers\u002Fpldi10.pdf) - Yang Chen, Yuanjie Huang, Lieven Eeckhout, Grigori Fursin, Liang Peng, Olivier Temam and Chengyong Wu. PLDI 2010.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Iterative optimization in the polyhedral model: Part II, multidimensional time](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fpldi-2008.pdf) - Louis-Noël Pouchet, Cédric Bastoul, Albert Cohen and John Cavazos. PLDI 2008.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Cole: compiler optimization level exploration](https:\u002F\u002Fusers.elis.ugent.be\u002F~leeckhou\u002Fpapers\u002Fcgo08.pdf) - Kenneth Hoste and Lieven Eeckhout. CGO 2008.\n- \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [MILEPOST GCC：基于机器学习的研究型编译器](http:\u002F\u002Fwww.fursin.net\u002Fpapers\u002Ffmtp2008.pdf) - 格里戈里·福尔辛、库比蒂诺·米兰达、奥利维尔·特马姆、米尔恰·纳莫拉鲁、埃拉德·约姆-托夫、阿亚尔·扎克斯、比尔哈·门德尔松等人，2008。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [启发式优化阶段顺序搜索算法的评估](http:\u002F\u002Fwww.cs.fsu.edu\u002F~whalley\u002Fpapers\u002Fcgo07.pdf) - J. W. 戴维森、加里·S·泰森、D. B. 韦利和P. A. 库尔卡尼。CGO 2007。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [利用性能计数器快速选择优秀的编译优化](http:\u002F\u002Febonilla.github.io\u002Fpapers\u002Fcavazos-et-al-cgo-2007.pdf) - 约翰·卡瓦佐斯、格里戈里·福尔辛、费利克斯·阿加科夫、埃德温·博尼利亚、迈克尔·FP·奥博伊尔和奥利维尔·特马姆。CGO 2007。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [利用机器学习聚焦迭代优化](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fbfranke\u002FPublications\u002Fcgo-2006.pdf) - 费利克斯·阿加科夫、埃德温·博尼利亚、约翰·卡瓦佐斯、比约恩·弗兰克、格里戈里·福尔辛、迈克尔·FP·奥博伊尔、约翰·汤姆森、马克·图桑和克里斯托弗·K.I. 
Williams. CGO 2006.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Method-specific dynamic compilation using logistic regression](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.132.4370&rep=rep1&type=pdf) - John Cavazos and Michael F.P. O'Boyle. OOPSLA 2005.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Predicting unroll factors using supervised classification](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.93.2788&rep=rep1&type=pdf) - Mark Stephenson and Saman Amarasinghe. CGO 2005.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Fast searches for effective optimization phase sequences](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.93.2788&rep=rep1&type=pdf) - Prasad Kulkarni, Stephen Hines, Jason Hiser, David Whalley, Jack Davidson and Douglas Jones. PLDI 2004.\n\n#### Instruction-level Optimisation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [VEGA: Automatically Generating Compiler Backends Using a Pre-trained Transformer Model](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3696443.3708931) - Ming Zhong, Fang Lv, Lulin Wang, Lei Qiu, Yingying Wang, Ying Liu, Huimin Cui, Xiaobing Feng, Jingling Xue. CGO 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [RL4ReAl: Reinforcement Learning for Register Allocation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.02013.pdf) - S. VenkataKeerthy, Siddharth Jain, Anilava Kundu, Rohit Aggarwal, Albert Cohen, Ramakrishna Upadrasta. CC 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Reinforcement Learning assisted Loop Distribution for Locality and Vectorization](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F365475992_Reinforcement_Learning_assisted_Loop_Distribution_for_Locality_and_Vectorization) - Shalini Jain, S. 
VenkataKeerthy, Rohit Aggarwal, Tharun Kumar Dangeti, Dibyendu Das, Ramakrishna Upadrasta. LLVM HPC Workshop 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Discovering faster matrix multiplication algorithms with reinforcement learning](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs41586-022-05172-4.pdf) - Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov et al. Nature 2022.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [A Reinforcement Learning Environment for Polyhedral Optimizations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13732.pdf) - Alexander Brauckmann, Andrés Goens, Jeronimo Castrillon. PACT, 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [AI Powered Compiler Techniques for DL Code Optimization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.05573.pdf) - Sanket Tavarageri, Gagandeep Goyal, Sasikanth Avancha, Bharat Kaul, Ramakrishna Upadrasta. Arxiv.org, 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [VeGen: A Vectorizer Generator for SIMD and Beyond](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2021\u002Fvegen.pdf) - Yishen Chen, Charith Mendis, Michael Carbin, Saman Amarasinghe. ASPLOS 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Deep Learning-based Hybrid Graph-Coloring Algorithm for Register Allocation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.03700) - Dibyendu Das, Shahid Asghar Ahmad, Kumar Venkataramanan. LLVM HPC Workshop, 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [NeuroVectorizer: End-to-End Vectorization with Deep Reinforcement Learning](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~krste\u002Fpapers\u002Fneurovectorizer-cgo2020.pdf) - Ameer Haj-Ali, Nesreen K. 
Ahmed, Ted Willke, Yakun Sophia Shao, Krste Asanovic, Ion Stoica. CGO 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Unleashing the Power of Learning: An Enhanced Learning-Based Approach for Dynamic Binary Translation](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fatc19-song_0.pdf) - Changheng Song, Wenwen Wang, Pen-Chung Yew, Antonia Zhai, Weihua Zhang. USENIX ATC 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Compiler Auto-Vectorization with Imitation Learning](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9604-compiler-auto-vectorization-with-imitation-learning.pdf) - Charith Mendis, Cambridge Yang, Yewen Pu, Saman P. Amarasinghe, Michael Carbin. NeurIPS 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F19-pages-green.svg\" alt=\"19-pages\" align=\"top\"> [A Multi-Objective Exploration for Practical Optimization Decisions in Binary Translation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3358185) - Sunghyun Park, Youfeng Wu, Janghaeng Lee, Amir Aupov, Scott Mahlke. ACM Transactions on Embedded Computing Systems (TECS), 2019.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Automatic construction of inlining heuristics using machine learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FCGO.2013.6495004) - Sameer Kulkarni, John Cavazos, Christian Wimmer, Douglas Simon. CGO 2013.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Automatic tuning of inlining heuristics](http:\u002F\u002Fsc05.supercomputing.org\u002Fschedule\u002Fpdf\u002Fpap274.pdf) - John Cavazos and Michael O'Boyle. SC 2005.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Inducing heuristics to decide whether to schedule](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fpldi-2004.pdf) - John Cavazos and J. Eliot B. 
Moss. PLDI 2004.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [Meta optimization: improving compiler heuristics with machine learning](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcag\u002Fmetaopt\u002Fpapers\u002Fmetaopt-pldi03.pdf) - Mark Stephenson, Saman Amarasinghe, Martin Martin, Una-May O'Reilly. PLDI 2003.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [Learning to schedule straight-line code](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F1349-learning-to-schedule-straight-line-code.pdf) - J. Eliot B. Moss, Paul E. Utgoff, John Cavazos, Doina Precup, Darko Stefanovic, Carla E. Brodley, David Scheeff. NeurIPS 1998.\n\n#### Auto-tuning and Design Space Exploration\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [The Inference Compiler: LLM-Guided Optimization for Efficient Model Inference] (NeurIPS 2025) - Annabelle Sujun Tang, Christopher Priebe, Rohan Mahapatra, Lianhui Qin, Hadi Esmaeilzadeh.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [Compiler-R1: Reinforcement Learning-based Intelligent Compiler Auto-tuning] (NeurIPS 2025) - Haolin Pan, Hongyu Lin, Haoran Luo, Yang Liu, Kaichun Yao, Libo Zhang, Mingjie Xing, Yanjun Wu.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [pyATF: Constraint-Based Auto-Tuning in Python] (CC 2025) - Richard Schulze, Sergei Gorlatch, Ari Rasch.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [IntelliGen: Instruction-Level Auto-tuning for Tensor Programs with Monotonic Memory Optimization] (CGO 2025) - Zixuan Ma, Haojie Wang, Jingze Xing, Shuhong Huang, Liyan Zheng, Chen Zhang, Huanqi Cao, Kezhao Huang, Mingshu Zhai, Shizhi Tang, Penghan Wang, Jidong Zhai.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Accelerated Auto-Tuning of GPU Kernels for Tensor Computations] (ICS 2024) - Chendi Li, Yufan Xu, Sina Mahdipour Saravani, P. 
Sadayappan.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Revealing Compiler Heuristics through Automated Discovery and Optimization] (CGO 2024) - Volker Seeker, Chris Cummins, Murray Cole, Björn Franke, Kim Hazelwood, Hugh Leather.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [The Droplet Search Algorithm for Kernel Scheduling] (ACM TACO 2024) - Michael Canesche, Vanderson M. Rosario, Edson Borin, Fernando Magno Quintão Pereira.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F24-pages-green.svg\" alt=\"24-pages\" align=\"top\"> [BaCO: A Fast and Portable Bayesian Compiler Optimization Framework] (ASPLOS 2024) - Erik Hellsten, Artur Souza, Johannes Lenfers, Rubens Lacouture, Olivia Hsu, Adel Ejjeh, Fredrik Kjolstad, Michel Steuwer, Kunle Olukotun, Luigi Nardi.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [(De\u002FRe)-Compositions Expressed Systematically via MDH-Based Schedules] (CC 2023) - Ari Rasch, Richard Schulze, Denys Shabalin, Anne Elster, Sergei Gorlatch, Mary Hall.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [Autotuning Convolutions is Easier Than You Think] (ACM TACO 2022) - Nicolas Tollenaere, Guillaume Iooss, Stéphane Pouget, Hugo Brunie, Christophe Guillon, Albert Cohen, P. 
Sadayappan, Fabrice Rastello.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation] (PACT 2022) - Perry Gibson, Jose Cano.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Glimpse: Mathematical Embedding of Hardware Specification for Neural Compilation] (DAC 2022) - Byung Hoon Ahn, Sean Kinzer, Hadi Esmaeilzadeh.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [One-Shot Tuner for Deep Learning Compilers] (CC 2022) - Jaehun Ryu, Eunhyeok Park, Hyojin Sung.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [A Flexible Approach to Autotuning Multi-Pass Machine Learning Compilers] (PACT 2021) - Phitchaya Mangpo Phothilimthana, Amit Sabne, Nikhil Sarda, Karthik Srinivasa Murthy, Yanqi Zhou, Christof Angermueller, Mike Burrows, Sudip Roy, Ketan Mandke, Rezsa Farahani, Yu Emma Wang, Berkin Ilbeyi, Blake Hechtman, Bjarke Roune, Shen Wang, Yuanzhong Xu, Samuel J. 
Kaufman.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [TASO: Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions] (SOSP 2019) - Zhihao Jia, Oded Padon, James Thomas, Todd Warszawski, Matei Zaharia, Alex Aiken.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Value Learning for Throughput Optimization of Deep Neural Network Workloads] (MLSys 2021) - Benoit Steiner, Chris Cummins, Horace He, Hugh Leather.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation] (ICLR 2021) - Minjia Zhang, Menghao Li, Chi Wang, Mingqin Li.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning] (ICLR 2021) - Shauharda Khadka, Estelle Aflalo, Mattias Mardar, Avrech Ben-David, Santiago Miret, Shie Mannor, Tamir Hazan, Hanlin Tang, Somdeb Majumdar.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [GPTune: Multitask Learning for Autotuning Exascale Applications] (PPoPP 2021) - Yang Liu, Wissam M. Sid-Lakhdar, Osni Marques, Xinran Zhu, Chang Meng, James W. Demmel, Xiaoye S. Li.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [ApproxTuner: A Compiler and Runtime System for Adaptive Approximations] (PPoPP 2021) - Hashim Sharif, Yifan Zhao, Maria Kotsifakou, Akash Kothari, Ben Schreiber, Elizabeth Wang, Yasmin Sarita, Nathan Zhao, Keyur Joshi, Vikram S. 
Adve, Sasa Misailovic, Sarita Adve.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Efficient Auto-Tuning of Parallel Programs with Interdependent Tuning Parameters via Auto-Tuning Framework (ATF)] (ACM TACO 2021) - Ari Rasch, Richard Schulze, Michel Steuwer, Sergei Gorlatch.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation] (ICLR 2020) - Byung Hoon Ahn, Prannoy Pilligundla, Amir Yazdanbakhsh, Hadi Esmaeilzadeh.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [Ansor: Generating High-Performance Tensor Programs for Deep Learning] (OSDI 2020) - Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [A Pattern Based Algorithmic Autotuner for Graph Processing on GPUs] (PPoPP 2019) - Ke Meng, Jiajia Li, Guangming Tan, Ninghui Sun.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search] (CVPR 2019) - Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [TVM: An Automated End-to-End Optimizing Compiler for Deep Learning] (OSDI 2018) - Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan et al.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [BOAT: Building Auto-Tuners with Structured Bayesian Optimization] (WWW 2017) - Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [Cobayn: Compiler Autotuning Framework Using Bayesian Networks] (TACO 2016) - Amir H. 
Ashouri, Giovanni Mariani, Gianluca Palermo, Eunjung Park, John Cavazos, Cristina Silvano.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Autotuning Algorithmic Choice for Input Sensitivity] (PLDI 2015) - Yufei Ding, Jason Ansel, Kalyan Veeramachaneni, Xipeng Shen, Una-May O'Reilly, Saman Amarasinghe.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [Fast: A Fast Stencil Autotuning Framework Based on an Optimal-Solution Space Model] (TACO 2015) - Yulong Luo, Guangming Tan, Zeyao Mo, Ninghui Sun.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Optimizing GPU Performance and Power Using Regression Trees] (SC 2015) - Wenhao Jia, Elba Garza, Kelly A. Shaw, Margaret Martonosi.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [Reinforcement Learning-Based Inter- and Intra-Application Thermal Optimization for Lifetime Improvement of Multicore Systems] (DAC 2014) - Anup K Das, Rishad Ahmed Shafik, Geoff V Merrett, Bashir M Al-Hashimi, Akash Kumar, Bharadwaj Veeravalli.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Opentuner: An Extensible Framework for Program Autotuning] (PACT 2014) - Jason Ansel, Shoaib Kamil, Kalyan Veeramachaneni, Jonathan Ragan-Kelley, Jeffrey Bosboom, Una-May O'Reilly, Saman Amarasinghe.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Taming Parallel I\u002FO Complexity with Auto-Tuning] (SC 2013) - Babak Behzad, Huong Vu Thanh Luu, Joseph Huchette, Surendra Byna, Ruth Aydt, Quincey Koziol, Marc Snir.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [A Multi-Objective Auto-Tuning Framework for Parallel Codes] (SC 2012) - Herbert Jordan, Peter Thoman, Juan J. 
Durillo, Simone Pellegrini, Philipp Gschwandtner, Thomas Fahringer, Hans Moritsch.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F8-pages-green.svg\" alt=\"8-pages\" align=\"top\"> [Bandit-Based Optimization on Graphs with Application to Library Performance Tuning] (ICML 2009) - Frédéric De Mesmay, Arpad Rimmel, Yevgen Voronenko, Markus Püschel.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Combining Models and Guided Empirical Search to Optimize for Multiple Levels of the Memory Hierarchy] (CGO 2005) - Chun Chen, Jacqueline Chame, Mary Hall.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Active Harmony: Towards Automated Performance Tuning] (SC 2002) - Cristian Tapus, I-Hsin Chung, Jeffrey K. Hollingsworth.\n\n#### Parallelism Mapping and Task Scheduling\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Exploration of Convolutional Neural Network Models for Source Code Classification](https:\u002F\u002Fdoi.org\u002F10.1016\u002Fj.engappai.2020.104075) - Francesco Barchi, Emanuele Parisi, Gianvito Urgese, Elisa Ficarra and Andrea Acquaviva. Engineering Applications of Artificial Intelligence, January 2021.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [Autopilot: Workload Autoscaling at Google](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3342195.3387524) - Krzysztof Rzadca, Pawel Findeisen, Jacek Swiderski, Przemyslaw Zych, Przemyslaw Broniek, Jarek Kusmierek, Pawel Nowak, Beata Strack, Piotr Witusowski, Steven Hand, John Wilkes. EuroSys 2020. [Slides](https:\u002F\u002Fwww.eurosys2020.org\u002Fwp-content\u002Fuploads\u002F2020\u002F04\u002Fslides\u002F149_rzadca_slides.pdf)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Modeling and Optimizing NUMA Effects and Prefetching with Machine Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3392717.3392765) - Isaac Sánchez Barrera, David Black-Schaffer, Marc Casas, Miquel Moretó, Anastasiia Stupnikova and Mihail Popov. ICS 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" 
align=\"top\"> [Poise：利用机器学习平衡 GPU 中线程级并行性和内存系统性能](https:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fvnagaraj\u002Fpapers\u002Fhpca19.pdf) - Saumay Dublish、Vijay Nagarajan 和 Nigel Tophama。HPCA 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [NUMA 架构中的数据与线程放置：基于统计学习的方法](https:\u002F\u002Fwww.mcs.anl.gov\u002Fresearch\u002Fprojects\u002Fargo\u002Fpublications\u002F2019-icpp-denoyelle.pdf) - Nicolas Denoyelle、Brice Goglin、Emmanuel Jeannot 和 Thomas Ropars。ICPP 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [使用深度学习和 LLVM-IR 在异构平台上进行代码映射](https:\u002F\u002Firis.polito.it\u002Fretrieve\u002Fhandle\u002F11583\u002F2726074\u002F327896\u002Fdocument_post_print.pdf) - Francesco Barchi、Gianvito Urgese、Enrico Macii 和 Andrea Acquaviva。DAC 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [嵌入式异构系统上 OpenCL 程序的适应性优化](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F83920402.pdf) - Ben Taylor、Vicent Sanz Marco 和 Zheng Wang。LCTES 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [通过内存感知的任务共置提升 Spark 应用程序吞吐量：专家混合方法](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fmiddleware17.pdf) - Vicent Sanz Marco、Ben Taylor、Barry Porter 和 Zheng Wang。Middleware 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [CPU\u002FGPU 异构平台上 OpenCL 程序的智能多任务调度](http:\u002F\u002Fwww.lancaster.ac.uk\u002Fstaff\u002Fwangz3\u002Fpublications\u002Fhipc14.pdf) - Yuan Wen、Zheng Wang 和 Michael FP O'Boyle。HiPC 2015。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [Quasar：资源高效且面向 QoS 
的集群管理](http:\u002F\u002Fcsl.stanford.edu\u002F~christos\u002Fpublications\u002F2014.quasar.asplos.pdf) - Christina Delimitrou 和 Christos Kozyrakis。ASPLOS 2014。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [面向基于 GPU 的异构系统的数据并行程序到 OpenCL 的自动且可移植映射](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fzheng_taco_2015.pdf) - Zheng Wang、Dominik Grewe 和 Michael O'Boyle。ACM 架构与代码优化汇刊（TACO），2014年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [集成基于性能剖析的并行性检测与基于机器学习的映射](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Ftaco14.pdf) - Zheng Wang、Georgios Tournavitis、Björn Franke 和 Michael FP O'Boyle。ACM 架构与代码优化汇刊（TACO），2014年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [异构架构上的可移植性能](https:\u002F\u002Fmangpo.net\u002Fpapers\u002Fpbgpu-asplos13.pdf) - Phitchaya Mangpo Phothilimthana、Jason Ansel、Jonathan Ragan-Kelley、Saman Amarasinghe。ASPLOS 2013。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [在外部工作负载存在时的智能自适应并行性映射](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1109\u002FCGO.2013.6495010) - Murali Krishna Emani、Zheng Wang 和 Michael O'Boyle。CGO 2013。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [面向多核的流式并行性划分：基于机器学习的方法](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpact10.pdf) - Zheng Wang 和 Michael O'Boyle。PACT 2010。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Qilin：通过适应性映射在异构多处理器上挖掘并行性](http:\u002F\u002Fwww.sphong.net\u002FMICRO_2009.pdf) - Chi-Keung Luk、Sunpyo Hong 和 Hyesoon Kim。MICRO 2009。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" 
align=\"top\"> [将并行性映射到多核：基于机器学习的方法](http:\u002F\u002Fllvm.org\u002Fpubs\u002F2009-02-PPoPP-MappingParallelism.pdf) - Zheng Wang 和 Michael O'Boyle。PPoPP 2009。\n\n#### 领域特定优化\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Seer: 用于不规则问题的预测性运行时内核选择](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10444812) - Ryan Swann, Muhammad Osama, Karthik Sangaiah, Jalal Mahmud. CGO 2024\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [基于概率程序的张量程序优化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.13603.pdf) - Junru Shao, Xiyou Zhou, Siyuan Feng, Bohan Hou, Ruihang Lai, Hongyi Jin, Wuwei Lin, Masahiro Masuda, Cody Hao Yu, Tianqi Chen. NeurIPS 2022\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [moTuner: 一种基于编译器的混合精度算子自动调优方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3528416.3530231) - Zewei Mo, Zejia Lin, Xianwei Zhang, Yutong Lu. CF 2022\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Collage: 深度学习后端的自动化集成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3559009.3569651) - Byungsoo Jeon, Sunghyun Park, Peiyuan Liao, Sheng Xu, Tianqi Chen, Zhihao Jia. PACT 2022\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [使用门控连续逻辑网络学习非线性循环不变式](https:\u002F\u002Fwww.cs.columbia.edu\u002F~rgu\u002Fpublications\u002Fpldi20-yao.pdf) - J. Yao, G. Ryan, J. Wong, S. Jana 和 R. Gu。PLDI 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [面向 C++ 服务器工作负载的学习型内存分配](https:\u002F\u002Fwww.cs.utexas.edu\u002Fusers\u002Fmckinley\u002Fpapers\u002Fllama-asplos-2020.pdf) - Maas、Martin、David G. 
Andersen, Michael Isard, Mohammad Mahdi Javanmard, Kathryn S. McKinley and Colin Raffel. ASPLOS 2020. [Talk video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=gs8m5W-xdDM&feature=emb_title)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [Bridging the Gap between Deep Learning and Sparse Matrix Format Selection](https:\u002F\u002Fpeople.engr.ncsu.edu\u002Fxshen5\u002FPublications\u002Fppopp18.pdf) - Yue Zhao, Jiajia Li, Chunhua Liao and Xipeng Shen. PPoPP 2018.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [Camel: Smart, Adaptive Energy Optimization for Mobile Web Interactions](http:\u002F\u002Feprints.whiterose.ac.uk\u002F155720\u002F1\u002Fpaper.pdf) - Jie Ren, Y. Lu, Petteri Nurmi, Xiaoming Wang, Miao Ma, Ling Gao, Zhanyong Tang, Jie Zheng and Zheng Wang. INFOCOM 2020.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Optimizing Sorting with Genetic Algorithms](http:\u002F\u002Fpolaris.cs.uiuc.edu\u002F~garzaran\u002Fdoc\u002Fcgo05.pdf) - Xiaoming Li, Maria Jesus Garzaran and David Padua. CGO 2005.\n\n#### Languages and Compilation\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [QiMeng-Xpiler: Transcompiling Tensor Programs for Deep Learning Systems with a Neural-Symbolic Approach](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi25-dong.pdf) - Shouyang Dong, Yuanbo Wen, Jun Bi, Di Huang, Jiaming Guo, Jianxing Xu, Ruibai Xu, Xinkai Song, Yifan Hao, Ling Li, Xuehai Zhou, Tianshi Chen, Qi Guo, Yunji Chen, OSDI 2025.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F74-pages-green.svg\" alt=\"74-pages\" align=\"top\"> [(De\u002FRe)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3665643) - Ari Rasch, TOPLAS 2024.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines](https:\u002F\u002Fcore.ac.uk\u002Fdownload\u002Fpdf\u002F20024748.pdf) - Jonathan 
Ragan-Kelley、Connelly Barnes、Andrew Adams、Sylvain Paris、Frédo Durand 和 Saman Amarasinghe，PLDI 2013。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [PetaBricks: 用于算法选择的语言及编译器](http:\u002F\u002Fpeople.csail.mit.edu\u002Fcychan\u002Fpapers\u002F2009pldi-petabricks.pdf) - Jason Ansel、Cy Chan、Yee Lok Wong、Marek Olszewski、Qin Zhao、Alan Edelman 和 Saman Amarasinghe。PLDI 2009。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [以函数式方式实现高性能：将高性能优化表示为重写策略的函数式珍珠](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3408974) - Bastian Hagedorn、Johannes Lenfers、Thomas Kœhler、Xueying Qin、Sergei Gorlatch 和 Michel Steuwer。ACM 编程语言会议论文集 2020。\n\n#### 代码大小缩减\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [利用子集和归一化值预测学习编译器优化顺序](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.05104.pdf) - 梁友伟、凯文·斯通、阿里·沙梅利、克里斯·卡明斯、穆斯塔法·埃尔侯希、郭嘉东、贝努瓦·施泰纳、杨晓萌、谢鹏涛、休·莱瑟、田元东。ICML 2023。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [POSET-RL：基于强化学习的阶段排序以优化代码大小和执行时间](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9804673) - 沙利尼·贾因、亚沙斯·安达卢里、S·文卡塔基尔蒂、拉马克里什纳·乌帕德拉斯塔。ISPASS 2022。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [探索用于代码大小缩减的优化序列空间：见解与工具](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3446804.3446849) - 安德森·法乌斯蒂诺·达·席尔瓦、贝尔纳多·N·B·德·利马、费尔南多·马格诺·昆塔奥·佩雷拉。CC 2021。[代码与数据](https:\u002F\u002Fzenodo.org\u002Frecord\u002F4416117)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [利用机器学习预测动态编译器中复制启发式对代码大小的影响](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3475738.3480943?sid=SCITRUS) - 拉斐尔·莫萨内尔、大卫·利奥波尔德塞德尔、卢卡斯·施塔德勒、汉斯彼得·莫斯恩博克。MPLR 2021。\n- \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [ANGHABENCH：包含一百万个可编译C基准测试的套件，用于代码大小缩减](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoCGO21.pdf) - 安德森·法乌斯蒂诺·达·席尔瓦、布鲁诺·孔德·金德、若泽·韦斯利·德·索萨·马加良埃斯、杰罗尼莫·努内斯·罗沙、布雷诺·坎波斯·费雷拉·吉马良斯、费尔南多·马格诺·昆塔奥·佩雷拉。CGO 2021。[代码与数据](http:\u002F\u002Fcuda.dcc.ufmg.br\u002Fangha\u002Fhome)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [强化学习引导的软件精简](http:\u002F\u002Fwww.csl.sri.com\u002Fusers\u002Fgehani\u002Fpapers\u002FMLSys-2019.DeepOCCAM.pdf) - 农咸黎文、阿希什·盖哈尼、阿里耶·古尔芬克尔、苏斯米特·贾、豪尔赫·A·纳瓦斯。MLSys 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [利用遗传算法优化以减少代码空间](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.13.1586&rep=rep1&type=pdf) - 凯斯·D·库珀、菲利普·J·希埃尔克、德维卡·苏布拉马尼安。LCTES 1999。\n\n#### 成本与性能模型\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [TLP：基于深度学习的张量程序调优成本模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.03578) - 翟毅、张宇、刘硕、楚晓萌、彭杰、季建民、张燕勇，ASPLOS，2023年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [Performance-Detective：自动推导廉价且准确的性能模型](https:\u002F\u002Fmcopik.github.io\u002Fassets\u002Fpdf\u002F2022_ics_schmid_perf_detective.pdf) - 拉里萨·施密德、马尔钦·科皮克、亚历山德鲁·卡洛图伊、多米尼克·韦尔勒、安德烈亚斯·赖特、迈克尔·塞尔策、安妮·科齐奥莱克、托斯滕·霍夫勒，ICS，2022年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [基于神经网络的任务迁移性能预测——面向S-NUCA众核架构](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9190026) - 马丁·拉普、阿努杰·帕塔尼亚、图莉卡·米特拉、约格·亨克尔，IEEE计算机汇刊，2021年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> 
[基于深度学习的自动代码优化成本模型](https:\u002F\u002Fproceedings.mlsys.org\u002Fpaper\u002F2021\u002Ffile\u002F3def184ad8f4755ff269862ea77393dd-Paper.pdf) - 里亚德·巴格达迪、马西尼萨·梅鲁瓦尼、穆罕默德-希查姆·莱盖塔斯、卡迈勒·阿卜杜斯、塔哈·阿尔鲍伊、卡里玛·贝纳奇巴、萨曼·阿马拉辛格，MLSys 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [利用深度学习进行代码结构比较分析以预测性能](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07660) - 内森·皮诺、塔雷克·拉马丹、坦齐玛·Z·伊斯兰、蔡斯·菲尔普斯、贾亚拉曼·J·蒂亚加拉詹，ISPASS 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [从污染程序中提取纯净性能模型](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9139798) - 马尔钦·科皮克、亚历山德鲁·卡洛图伊、托比亚斯·格罗瑟、尼古拉斯·维基、费利克斯·沃尔夫、托斯滕·霍夫勒。PPoPP 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F15-pages-green.svg\" alt=\"15-pages\" align=\"top\"> [PMEvo：通过进化优化实现对乱序处理器端口映射的可移植推断](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10044.pdf) - 法比安·里特、塞巴斯蒂安·哈克。PLDI 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [用于性能调优中经验建模的主动学习方法](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9139798) - 张洁鹏、孙静伟、周文举、孙广忠。IPDPS 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [使用树搜索和随机程序学习优化Halide](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3306346.3322967) - 安德鲁·亚当斯、卡里玛·马、卢克·安德森、里亚德·巴格达迪、李祖茂、迈克尔·加尔比、贝努瓦·施泰纳、史蒂文·约翰逊、凯文·法塔哈利安、弗雷多·杜兰、乔纳森·拉根-凯利。ACM图形学汇刊，2019年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [Ithemal：利用深度神经网络实现准确、可移植且快速的基本块吞吐量估计](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fmendis19a\u002Fmendis19a.pdf) - 查里思·门迪斯、亚历克斯·伦达、萨曼·阿马拉辛格和迈克尔·卡宾。ICML 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> 
[Absinthe：学习解析型性能模型，一次性融合并分块模板代码](http:\u002F\u002Funixer.de\u002Fpublications\u002Fimg\u002Fgysi-absinthe.pdf) - 托比亚斯·吉西、托比亚斯·格罗瑟和托斯滕·霍夫勒。PACT 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F22-pages-green.svg\" alt=\"22-pages\" align=\"top\"> [通过分析公开数据集预测新工作负载或CPU性能](https:\u002F\u002Fyuemmawang.github.io\u002Fpublications\u002Fwang-taco2019.pdf) - 王宇、维克多·李、顾延伟和戴维·布鲁克斯。ACM体系结构与代码优化汇刊（TACO），2019年。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [自动创建分块大小选择模型](http:\u002F\u002Fpeople.rennes.inria.fr\u002FTomofumi.Yuki\u002Fpapers\u002Fyuki-cgo2010.pdf) - 幸富智文、拉克什米纳拉亚南·雷加纳拉亚南、桑杰·拉乔帕迪耶、查尔斯·安德森、亚历山大·E·艾肯伯格和凯文·奥布莱恩。CGO 2010。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [面向编译器优化的微架构敏感经验模型](https:\u002F\u002Fwww.csa.iisc.ac.in\u002F~srikant\u002Fpapers-theses\u002Fkapil-CGO-2007.pdf) - 卡皮尔·瓦斯瓦尼、马修·J·塔祖塔维尔、Y·N·斯里坎特和P·J·约瑟夫。CGO 2007。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [用于程序优化的精确静态估算器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F178243.178251) - 蒂姆·A·瓦格纳、万斯·马弗里克、苏珊·L·格雷厄姆和迈克尔·A·哈里森。PLDI 1994。\n\n#### 学习型程序表示\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [性能嵌入：一种基于相似性的迁移调优方法用于性能优化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3533767.3534383) -  L Trümper, T Ben-Nun, P Schaad, A Calotoiu, T Hoefler. ICS 2023.\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [利用图对齐的表示学习改进跨平台二进制分析](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3533767.3534383) -  Geunwoo Kim,  Sanghyun Hong, Michael Franz,  Dokyung Song. 
ISSTA 2022。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F46-pages-green.svg\" alt=\"46-pages\" align=\"top\"> [用于预测性编译的程序表示：21世纪初现状](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoJCL22.pdf) - Anderson Faustino da Silva, Edson Borin, Fernando Magno Quintao Pereira, Nilton Luiz Queiroz Junior and Otavio Oliveira Napoli。JCL 2022。[代码与数据](https:\u002F\u002Fgithub.com\u002Fotavioon\u002FCOLA-2022-Tools)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [基于深度学习的比较代码结构分析用于性能预测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.07660.pdf) - Nathan Pinnow, Tarek Ramadan, Tanzima Z. Islam, Chase Phelps, Jayaraman J. Thiagarajan。ISPASS 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F18-pages-green.svg\" alt=\"18-pages\" align=\"top\"> [GraphCodeBERT：使用数据流预训练代码表示](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.08366.pdf) - Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou。ICLR 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [CodeBERT：面向编程和自然语言的预训练模型](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002F2020.findings-emnlp.139.pdf) - Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou。EMNLP 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [IR2VEC：基于LLVM IR的可扩展程序嵌入](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3418463) - S. VenkataKeerthy, Rohit Aggarwal, Shalini Jain, Maunendra Sankar Desarkar, Ramakrishna Upadrasta 和 Y. N. 
Srikant。TACO 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [通过多关系图学习进行深度程序结构建模](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fpact20.pdf) - Guixin Ye, Zhanyong Tang, Huanting Wang, Jianbin Fang, Songfang Huang 和 Zheng Wang。PACT 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [源代码的全局关系模型](https:\u002F\u002Fopenreview.net\u002Fpdf?id=B1lnbRNtwr) - Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, David Bieber，ICLR 2020。（[数据与代码](https:\u002F\u002Fgithub.com\u002FVHellendoorn\u002FICLR20-Great)）\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F45-pages-green.svg\" alt=\"45-pages\" align=\"top\"> [使用图区间神经网络学习语义程序嵌入](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.09997.pdf) - Yu Wang, Ke Wang, Fengjuan Gao, 和 Linzhang Wang。OOPSLA 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F27-pages-green.svg\" alt=\"27-pages\" align=\"top\"> [Flow2Vec：基于值流的精确代码嵌入](https:\u002F\u002Fyuleisui.github.io\u002Fpublications\u002Foopsla20.pdf) - Yulei Sui, Xiao Cheng, Guanqin Zhang 和 Haoyu Wang。OOPSLA 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F23-pages-green.svg\" alt=\"23-pages\" align=\"top\"> [MISIM：端到端神经代码相似度系统](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.05265.pdf) - Fangke Ye, Shengtian Zhou, Anand Venkat, Ryan Marcus, Nesime Tatbul, Jesmin Jahan Tithi, Paul Petersen, Timothy Mattson, Tim Kraska, Pradeep Dubey, Vivek Sarkar 和 Justin Gottschlich 。arXiv 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [混合、精确的语义程序嵌入](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3385412.3385999) - Ke Wang 和 Zhendong Su。PLDI 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" 
align=\"top\"> [LambdaNet：使用图神经网络的概率类型推断](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.02161.pdf) - Jiayi Wei, Maruth Goyal, Greg Durrett, 和 Isil Dillig。ICLR 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [基于编译器的图表示用于代码的深度学习模型](https:\u002F\u002Fcfaed.tu-dresden.de\u002Ffiles\u002FImages\u002Fpeople\u002Fchair-cc\u002Fpublications\u002F2002_Brauckmann_CC.pdf) - Alexander Brauckmann, Andrés Goens, Sebastian Ertel, 和 Jeronimo Castrillon。CC 2020。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F21-pages-green.svg\" alt=\"21-pages\" align=\"top\"> [使用图进行生成式代码建模](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08490.pdf) - Marc Brockschmidt, Miltos Allamanis, Alexander L. Gaunt, 和 Oleksandr Polozov。ICLR 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F22-pages-green.svg\" alt=\"22-pages\" align=\"top\"> [code2seq：从代码的结构化表示中生成序列](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.01400.pdf) - Uri Alon, Shaked Brody, Omer Levy, 和 Eran Yahav。ICLR 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F29-pages-green.svg\" alt=\"29-pages\" align=\"top\"> [code2vec：学习代码的分布式表示](http:\u002F\u002Fwww.cs.technion.ac.il\u002F~mbs\u002Fpublications\u002Fcode2vec-popl19.pdf) - Uri Alon, Meital Zilberstein, Omer Levy, 和 Eran Yahav。POPL 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [COSET：评估神经程序嵌入的基准](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.11445.pdf) - Ke Wang, Mihai Christodorescu。arXiv 2019。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [用图表示程序的学习](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2017\u002F11\u002FprogramGraphs.pdf) - Miltiadis Allamanis, Marc Brockschmidt, 和 Mahmoud Khademi。ICLR 2018。\n- \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [神经代码理解：代码语义的可学习表示](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7617-neural-code-comprehension-a-learnable-representation-of-code-semantics.pdf) - Tal Ben-Nun, Alice Shoshana Jakobovits, 和 Torsten Hoefler。NeurIPS 2018。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [优化启发式算法的端到端深度学习](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2017-deepopt-pact.pdf) - Chris Cummins, Pavlos Petoumenos, Zheng Wang, 和 Hugh Leather（[幻灯片](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fend-to-end-deep-learning-of-optimization-heuristics-pact-17)）。PACT 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F6-pages-green.svg\" alt=\"6-pages\" align=\"top\"> [语义感知的程序采样](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fwp-content\u002Fuploads\u002F2017\u002F11\u002Fnips_2017.pdf) - Pratiksha Thaker, Daniel Tarlow, 和 Marc Brockschmidt。NeurIPS 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F20-pages-green.svg\" alt=\"20-pages\" align=\"top\"> [DeepCoder：学习编写程序](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fuploads\u002Fprod\u002F2017\u002F03\u002Fmain.pdf) - Matej Balog, Alexander L. 
Gaunt, Marc Brockschmidt,\nSebastian Nowozin, 和 Daniel Tarlow。ICLR 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F7-pages-green.svg\" alt=\"7-pages\" align=\"top\"> [针对编程语言处理的树状结构卷积神经网络](http:\u002F\u002Fsei.pku.edu.cn\u002F~zhanglu\u002FDownload\u002FAAAI16.pdf) - Lili Mou, Ge Li, Lu Zhang, Tao Wang, 和 Zhi Jin。AAAI 2016。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [用于源代码极端摘要的卷积注意力网络](http:\u002F\u002Fproceedings.mlr.press\u002Fv48\u002Fallamanis16.pdf) - Miltos Allamanis, Hao Peng, 和 Charles Sutton。ICML 2016。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F9-pages-green.svg\" alt=\"9-pages\" align=\"top\"> [自然源代码的结构化生成模型](http:\u002F\u002Fproceedings.mlr.press\u002Fv32\u002Fmaddison14.pdf) - Chris Maddison 和 Daniel Tarlow。ICML 2014。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [利用基于图的程序表征进行预测建模](https:\u002F\u002Fwww.eecis.udel.edu\u002F~cavazos\u002Fcgo-2012.pdf) - Eunjung Park, John Cavazos, 和 Marco A. 
Alvarez。CGO 2012。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F11-pages-green.svg\" alt=\"11-pages\" align=\"top\"> [基于机器学习的优化编译中的自动特征生成](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2009_autofeatures_cgo.pdf) - Hugh Leather, Edwin Bonilla, 和 Michael O'Boyle。CGO 2009。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [基于游戏的框架用于比较程序分类器和规避者](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FCGO23_ThaisDamasio.pdf) - Thais Damasio, Michael Canesche, Vinicius Pacheco, Anderson Faustino da Silva, Marcus Botacin 和 Fernando Magno Quintao Pereira。CGO 2023。[代码与数据](https:\u002F\u002Fzenodo.org\u002Frecord\u002F7374649)\n\n#### 用于编译器和系统优化的机器学习\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [DFA-Net：一种特定于编译器的神经架构，用于在数据流分析中实现稳健泛化](https:\u002F\u002Fcfaed.tu-dresden.de\u002Ffiles\u002FImages\u002Fpeople\u002Fchair-cc\u002Fpublications\u002F2503_Brauckmann_CC.pdf) - Alexander Brauckmann, Anderson Faustino da Silva, Gabriel Synnaeve, Michael FP O’Boyle, Jeronimo Castrillon, Hugh Leather。CC 2025。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [使用编译器引导的大语言模型进行还原分析，以实现以输入为中心的代码优化](https:\u002F\u002Fresearch.csc.ncsu.edu\u002Fpicture\u002Fpublications\u002Fpapers\u002Fpldi2025) - Xiangwei Wang, Xinning Hui, Chunhua Liao, Xipeng Shen。PLDI 2025。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F17-pages-green.svg\" alt=\"17-pages\" align=\"top\"> [提升部署时预测模型的鲁棒性，用于代码分析和优化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.00298.pdf) - Huanting Wang, Patrick Lenihan, Zheng Wang。CGO 2025。（[代码](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FPROM\u002F)）\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F19-pages-green.svg\" alt=\"19-pages\" 
align=\"top\"> [MLIR变换方言——你的编译器比你想象的更强大](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2409.03864.pdf) - Martin Paul Lücke, Oleksandr Zinenko, William S. Moses, Michel Steuwer, Albert Cohen。arXiv 2024。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F33-pages-green.svg\" alt=\"33-pages\" align=\"top\"> [元大语言模型编译器：编译器优化的基础模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.02524.pdf) - Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, Hugh Leather。arXiv 2024。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F13-pages-green.svg\" alt=\"13-pages\" align=\"top\"> [接下来的700项基于机器学习的编译器优化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.10800.pdf) - S. VenkataKeerthy, Siddharth Jain, Umesh Kalvakuntla, Pranav Sai Gorantla, Rajiv S Chitale, Eugene Brevdo, Albert Cohen, Mircea Trofin, Ramakrishna Upadrasta。CC 2024。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [BenchPress：深度主动基准生成器](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.06555.pdf) - Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather。PACT 2022（[代码](https:\u002F\u002Fgithub.com\u002Ffivosts\u002FBenchPress)）\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [自动化强化学习架构设计用于代码优化](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fcc22.pdf) - Huanting Wang, Zhanyong Tang, Cheng Zhang, Jiaqi Zhao, Chris Cummins, Hugh Leather, Zheng Wang。CC 2022（[代码](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FSUPERSONIC)）\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F14-pages-green.svg\" alt=\"14-pages\" align=\"top\"> [学习语义表示以验证硬件设计](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper\u002F2021\u002Ffile\u002Fc5aa65949d20f6b20e1a922c13d974e7-Paper.pdf) - Shobha 
Vasudevan, Wenjie (Joe) Jiang, David Bieber, Rishabh Singh, Hamid Shojaei, C. Richard Ho, Charles Sutton。NeurIPS 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F43-pages-green.svg\" alt=\"43-pages\" align=\"top\"> [MLIR中的可组合与模块化代码生成：一种结构化且可重定向的张量编译器构建方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.03293.pdf) - Nicolas Vasilache, Oleksandr Zinenko, Aart J.C. Bik, Mahesh Ravishankar, Thomas Raoux, Alexander Belyaev, Matthias Springer, Tobias Gysi, Diego Caballero, Stephan Herhut, Stella Laurenzo, Albert Cohen。arXiv 2022。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [基于深度NLP的协同进化，用于从自然语言合成代码分析](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3446804.3446852) - Zifan Nan, Hui Guan, Xipeng Shen, Chunhua Liao。CC 2021。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [MLGO：一个由机器学习指导的编译器优化框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04808.pdf) - Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li。arXiv。[代码](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fml-compiler-opt)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F16-pages-green.svg\" alt=\"16-pages\" align=\"top\"> [更好地理解黑盒自动调优：存储系统的比较分析](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fatc18\u002Fatc18-cao.pdf) - Zhen Cao, Vasily Tarasov, Sachin Tiwari, 和 Erez Zadok。ATC 2018。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [为预测建模合成基准](https:\u002F\u002Fwww.pure.ed.ac.uk\u002Fws\u002Ffiles\u002F29479104\u002F2017_cgo_1.pdf) - Chris Cummins, Pavlos Petoumenos, Zheng Wang, 和 Hugh Leather（[幻灯片](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fsynthesizing-benchmarks-for-predictive-modelling-cgo-17)）。CGO 2017。\n- \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F12-pages-green.svg\" alt=\"12-pages\" align=\"top\"> [利用主动学习最小化迭代编译的成本](http:\u002F\u002Fhomepages.inf.ed.ac.uk\u002Fhleather\u002Fpublications\u002F2017-minimitercomp-cgo.pdf) - William Ogilvie, Pavlos Petoumenos, Zheng Wang, 和 Hugh Leather。CGO 2017。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F28-pages-green.svg\" alt=\"28-pages\" align=\"top\"> [VESPA：用于二进制优化的静态剖析](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3485521) - Angelica Aparecida Moreira, Guilherme Ottoni, 和 Fernando Magno Quintao Pereira。OOPSLA 2021。[代码与数据](https:\u002F\u002Fzenodo.org\u002Frecord\u002F5502310)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F35-pages-green.svg\" alt=\"35-pages\" align=\"top\"> [在异构多核系统中通过程序输入的统计回归映射计算](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FJunioTECS21.pdf) - Junio Cezar Ribeiro Da Silva, Lorena Leao, Vinicius Petrucci, Abdoulaye Gamatie 和 Fernando Magno Quintao Pereira。TECS 2021。\n\n### 内存\u002F缓存建模\u002F分析\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F25-pages-green.svg\" alt=\"25-pages\" align=\"top\"> [利用深度强化学习优化内存映射](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.07440.pdf) - 王鹏明、米基塔·萨扎诺维奇、贝尔金·伊尔贝伊、皮查亚·芒坡·波提林塔纳、马尼什·普罗希特、韩杨泰、银武、王妙森、科斯敏·帕杜拉鲁、爱德华·勒昂、安东·热尔诺夫、朱利安·施里特维瑟、托马斯·于贝尔、罗伯特·通、保拉·库里洛维茨、基兰·米兰、奥里奥尔·维尼亚尔斯、丹尼尔·J·曼科维茨。arXiv 2023。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F10-pages-green.svg\" alt=\"10-pages\" align=\"top\"> [学习内存访问模式](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Fhashemi18a\u002Fhashemi18a.pdf) - 米拉德·哈舍米、凯文·斯韦斯基、杰米·A·史密斯、格兰特·艾尔斯、海纳·利茨、张继川、克里斯托斯·科齐拉基斯、帕尔塔萨拉蒂·兰加纳坦。ICML 2018\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F26-pages-green.svg\" alt=\"26-pages\" align=\"top\"> [静态度量隐式存储操作](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3280848) - 费尔南多·马格诺·昆塔奥·佩雷拉、吉尔赫梅·维埃拉·莱奥巴斯和阿卜杜拉耶·加马蒂。TACO 
2019。[代码与数据](https:\u002F\u002Fwww.lirmm.fr\u002Fcontinuum-project\u002Fpages\u002Fs3a.html)\n\n## 书籍\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F118-pages-green.svg\" alt=\"118-pages\" align=\"top\"> [利用机器学习对编译器进行自动调优](https:\u002F\u002Flink.springer.com\u002Fbook\u002F10.1007\u002F978-3-319-71489-9) - 阿米尔·H·阿舒里、詹卢卡·帕莱尔莫、约翰·卡瓦佐斯和克里斯蒂娜·西尔瓦诺。Springer 2018。\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F377-pages-green.svg\" alt=\"377-pages\" align=\"top\"> [软件自动调优——从概念到最先进成果](https:\u002F\u002Fwww.springer.com\u002Fgp\u002Fbook\u002F9781441969347) - K Naono、K Teranishi、J Cavazos 和 R Suda。Springer 2010。\n\n## 报告与教程\n- 陈天奇等，[MLC：机器学习编译器](https:\u002F\u002Fmlc.ai\u002Findex.html)([GitHub](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fmlc-en))。OctoML 2022。\n- 萨曼·阿马拉辛格，[编译器2.0：利用机器学习现代化编译器技术](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=a1w_NKDVdkI)。LCTES 2020。\n- 阿米尔·阿舒里，[利用机器学习进行编译器自动调优：最新综述](https:\u002F\u002Fyoutu.be\u002FxNixKfDxDZE) ([幻灯片](http:\u002F\u002Famirashouri.ca\u002Fresources\u002FAmir_CompileAutotuning_Talk_2019_Google.pdf))。米兰理工大学 2018年。\n\n## 软件\n- [PROM](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FPROM\u002F) - 一个Python工具包，用于帮助识别部署后机器学习模型的误预测（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2501.00298))。\n- [ML-Compiler-Bridge](https:\u002F\u002Fgithub.com\u002FIITH-Compilers\u002FML-Compiler-Bridge) - 用于连接编译器与机器学习模型的库，以实现基于机器学习的编译器优化（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2311.10800.pdf))。\n- [Supersonic](https:\u002F\u002Fgithub.com\u002FHuantWang\u002FSUPERSONIC) - 自动化强化学习架构设计（[论文](https:\u002F\u002Fzwang4.github.io\u002Fpublications\u002Fcc22.pdf))。\n- [CompilerGym](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FCompilerGym) - 用于编译器优化的强化学习环境（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.08267.pdf))。\n- [CodeBERT](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCodeBERT) - 面向编程语言的预训练深度神经网络模型（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.08155.pdf))。\n- 
[IR2Vec](https:\u002F\u002Fgithub.com\u002FIITH-Compilers\u002FIR2Vec) - 基于LLVM IR的程序嵌入，用于机器学习（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.06228.pdf))。\n- [programl](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002FProGraML) - LLVM和XLA IR的程序表示，用于机器学习（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10536.pdf))。\n- [NeuroVectorizer](https:\u002F\u002Fgithub.com\u002Fintel\u002Fneuro-vectorizer) - 使用深度强化学习（RL）预测最佳向量化编译器指令（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.13639.pdf))。\n- [TVM](https:\u002F\u002Ftvm.apache.org\u002F) - 开源深度学习编译器栈，适用于CPU、GPU及专用加速器（[论文](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fosdi18-chen.pdf)；[幻灯片](https:\u002F\u002Fwww.usenix.org\u002Fsites\u002Fdefault\u002Ffiles\u002Fconference\u002Fprotected-files\u002Fosdi18_slides_chen.pdf))。\n- [MLC-LLM](https:\u002F\u002Fgithub.com\u002Fmlc-ai\u002Fmlc-llm) - 一个面向大型语言模型的机器学习编译器及高性能部署引擎（参考技术：[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.04296.pdf)、[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.13603.pdf)以及[TVM](https:\u002F\u002Ftvm.apache.org\u002F))。\n- [clgen](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002Fclgen) - 使用LSTM生成基准测试用例（[论文](https:\u002F\u002Fchriscummins.cc\u002Fpub\u002F2017-cgo.pdf)；[幻灯片](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fsynthesizing-benchmarks-for-predictive-modelling-cgo-17))。\n- [COBAYN](https:\u002F\u002Fgithub.com\u002Famirjamez\u002FCOBAYN) - 利用贝叶斯网络进行编译器自动调优（[论文](http:\u002F\u002Famirashouri.ca\u002Fresources\u002FCOBAYN-ashouri_taco16.pdf))。\n- [OpenTuner](https:\u002F\u002Fgithub.com\u002Fjansel\u002Fopentuner) - 用于构建领域特定多目标程序自动调优框架（[论文](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2014\u002Fansel-pact14-opentuner.pdf)；[幻灯片](http:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F2014\u002Fansel-pact14-opentuner-slides.pdf))。\n- [ONNX-MLIR](http:\u002F\u002Fonnx.ai\u002Fonnx-mlir\u002F) - 
ONNX模型在MLIR编译器基础设施中的表示与参考降级（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.08272.pdf))。\n- [IREE](https:\u002F\u002Fgithub.com\u002Fopenxla\u002Firee) - 一个可重定向的基于MLIR的机器学习编译器及运行时工具包。\n\n## 基准测试与数据集\n- [TenSet：用于机器学习张量编译器的大规模程序性能数据集](https:\u002F\u002Fgithub.com\u002Ftlc-pack\u002Ftenset) - 包含六个常用硬件平台的张量程序性能记录的数据集（[论文](https:\u002F\u002Fopenreview.net\u002Fpdf?id=aIfp8kLuvc9)）。\n- [阿尔伯塔大学为 SPEC CPU® 2017 基准测试套件提供的工作负载](https:\u002F\u002Fwebdocs.cs.ualberta.ca\u002F~amaral\u002FAlbertaWorkloadsForSPECCPU2017\u002F) - SPEC CPU2017 基准测试套件的附加工作负载。\n- [Project CodeNet](https:\u002F\u002Fgithub.com\u002FIBM\u002FProject_CodeNet) - 用 50 多种编程语言编写的代码样本，并附有代码大小、内存占用、CPU 运行时间及状态（通过\u002F错误类型）等标注信息。\n- [CodeXGLUE](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCodeXGLUE) - 用于代码理解和生成的机器学习基准数据集（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.04664.pdf)）。\n- [ANGHABENCH](http:\u002F\u002Fcuda.dcc.ufmg.br\u002Fangha\u002Fbenchmarks) - 包含一百万个可编译 C 语言基准测试的套件（[论文](https:\u002F\u002Fhomepages.dcc.ufmg.br\u002F~fernando\u002Fpublications\u002Fpapers\u002FFaustinoCGO21.pdf)）。\n- [BHive](https:\u002F\u002Fgithub.com\u002Fithemal\u002Fbhive) - 用于验证 x86-64 基本块性能模型的基准测试套件和测量框架（[论文](https:\u002F\u002Fgroups.csail.mit.edu\u002Fcommit\u002Fpapers\u002F19\u002Fithemal-measurement.pdf)）。\n- [cBench](https:\u002F\u002Fctuning.org\u002Fwiki\u002Findex.php\u002FCTools:CBench) - 32 个 C 语言基准测试，附带数据集和驱动脚本。\n- [PolyBench](http:\u002F\u002Fweb.cs.ucla.edu\u002F~pouchet\u002Fsoftware\u002Fpolybench\u002F) - 30 个模板计算和线性代数基准测试，附带数据集和驱动脚本。另请参阅：[GPU 版本](https:\u002F\u002Fgithub.com\u002Fcavazos-lab\u002FPolyBench-ACC)、[预计算数据集](https:\u002F\u002Fgithub.com\u002Fstefanocereda\u002Fpolybench_data)（[论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3372799.3394361)）。\n- [DeepDataFlow](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002FProGraML\u002Fblob\u002Fmaster\u002Fprograml\u002FDocumentation\u002FDataflowDataset.md) - 46.9 万个 LLVM-IR 文件以及 86 
亿条用于分类的数据流分析标签（[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10536.pdf)）。\n- [devmap](https:\u002F\u002Fgithub.com\u002FChrisCummins\u002Fpaper-end2end-dl) - 650 个 OpenCL 基准测试特征及 CPU\u002FGPU 分类标签（[论文](https:\u002F\u002Fchriscummins.cc\u002Fpub\u002F2017-pact.pdf)；[演示文稿](https:\u002F\u002Fspeakerdeck.com\u002Fchriscummins\u002Fend-to-end-deep-learning-of-optimization-heuristics-pact-17)）。\n\n## 会议\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM SIGPLAN 编程语言设计与实现会议，PLDI](https:\u002F\u002Fwww.sigplan.org\u002FConferences\u002FPLDI\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [编程语言与操作系统架构支持会议，ASPLOS](https:\u002F\u002Fasplos-conference.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM SIGPLAN 并行编程原理与实践研讨会，PPoPP](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fppopp)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIEEE-blue.svg\" alt=\"ACM\u002FIEEE\" align=\"top\"> [国际代码生成与优化研讨会，CGO](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fcgo)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIEEE-blue.svg\" alt=\"ACM\u002FIEEE\" align=\"top\"> [国际并行架构与编译技术会议，PACT](https:\u002F\u002Fdl.acm.org\u002Fconference\u002Fcgo)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [面向对象编程、系统、语言及应用会议，OOPSLA](http:\u002F\u002Fwww.sigplan.org\u002FConferences\u002FOOPSLA\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [国际编译器构造会议，CC](https:\u002F\u002Fconf.researchr.org\u002Fseries\u002FCC)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> 
[国际超级计算会议，ICS](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fics\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [国际高性能与嵌入式架构及编译器会议，HiPEAC](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fhipeac\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [国际嵌入式系统语言、编译器与工具会议，LCTES](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Flctrts\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [国际计算前沿会议，CF](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fcf)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [国际并行与分布式处理研讨会，IPDPS](http:\u002F\u002Fwww.ipdps.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [国际高性能计算、网络、存储与分析大会，SC](http:\u002F\u002Fsupercomputing.org\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWorkshop-Academic-blue.svg\" alt=\"Workshop\" align=\"top\"> [机器学习与编程语言研讨会，MAPL](https:\u002F\u002Fpldi20.sigplan.org\u002Fseries\u002Fmapl)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWorkshop-Academic-blue.svg\" alt=\"Workshop\" align=\"top\"> [并行计算语言与编译器会议，LCPC](https:\u002F\u002Fdblp.org\u002Fdb\u002Fconf\u002Flcpc\u002Findex)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"Academic\" align=\"top\"> [国际学习表示会议，ICLR](https:\u002F\u002Fdblp1.uni-trier.de\u002Fdb\u002Fconf\u002Ficlr\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"Academic\" align=\"top\"> [机器学习与系统会议，MLSys](https:\u002F\u002Fmlsys.org\u002F)\n\u003C!---- - \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [IEEE\u002FACM 国际微架构研讨会，Micro](https:\u002F\u002Fdblp1.uni-trier.de\u002Fdb\u002Fconf\u002Fmicro\u002F)\n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [国际嵌入式系统编译器、架构与综合会议，CASES](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fcases\u002Findex.html) \n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-USENIX-blue.svg\" alt=\"USENIX\" align=\"top\"> [USENIX 年度技术会议，ATC](https:\u002F\u002Fwww.usenix.org\u002Fconferences\u002Fbyname\u002F131) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-USENIX-blue.svg\" alt=\"USENIX\" align=\"top\"> [USENIX 操作系统设计与实现研讨会，OSDI](https:\u002F\u002Fdblp.org\u002Fdb\u002Fconf\u002Fosdi\u002Findex) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IEEE-blue.svg\" alt=\"IEEE\" align=\"top\"> [国际高性能计算、数据与分析会议，HiPC](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fhipc\u002Findex.html) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [国际虚拟执行环境会议，VEE](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fvee\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [欧洲计算机系统会议，EuroSys](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Feurosys\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM 算法与架构中的并行性研讨会，SPAA](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fspaa\u002F)\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-IACC-blue.svg\" alt=\"IACC\" align=\"top\"> 
[国际并行处理会议，ICPP](http:\u002F\u002Fwww.wikicfp.com\u002Fcfp\u002Fprogram?id=1447&f=International%20Conference%20on%20Parallel%20Processing) \n - \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-ACM\u002FIFIP\u002FUSENIX-blue.svg\" alt=\"ACM\u002FIFIP\u002FUSENIX\" align=\"top\"> [国际中间件会议，Middleware](http:\u002F\u002Fdblp.uni-trier.de\u002Fdb\u002Fconf\u002Fmiddleware\u002F) \n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FConference-Academic-blue.svg\" alt=\"ACM\" align=\"top\"> [欧洲并行处理会议，Euro-Par](http:\u002F\u002Fwww.wikicfp.com\u002Fcfp\u002Fprogram?id=967&f=European) --->\n\n## 期刊\n- \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FJournal-ACM-blue.svg\" alt=\"ACM\" align=\"top\"> [ACM 架构与代码优化汇刊，TACO](https:\u002F\u002Fdl.acm.org\u002Fjournal\u002Ftaco)\n\n## 如何贡献\n\n请参阅 [贡献指南](CONTRIBUTING.md)。简而言之：向 [维护者](MAINTAINERS) 发送一个 [拉取请求](https:\u002F\u002Fgithub.com\u002Fzwang4\u002Fawesome-machine-learning-in-compilers\u002Fpulls)。","# awesome-machine-learning-in-compilers 快速上手指南\n\n`awesome-machine-learning-in-compilers` 并非一个可直接安装运行的软件工具，而是一个**精选资源列表**（Curated List），汇集了将机器学习应用于编译器优化和程序优化的研究论文、数据集、工具和基准测试。\n\n本指南旨在帮助开发者快速利用该列表中的资源，搭建自己的研究或开发环境。\n\n## 环境准备\n\n由于该仓库主要包含文献链接和外部工具引用，使用前需准备以下基础环境以阅读文献、克隆代码或复现论文中的工具：\n\n*   **操作系统**: Linux (推荐 Ubuntu\u002FCentOS), macOS 或 Windows (配合 WSL2)。\n*   **版本控制**: `git` (用于克隆仓库及列表中提到的其他开源项目)。\n*   **文档阅读**: PDF 阅读器 (大部分论文为 PDF 格式)。\n*   **开发依赖** (针对列表中具体的 ML 编译器工具):\n    *   Python 3.8+\n    *   PyTorch 或 TensorFlow (根据具体论文要求)\n    *   LLVM \u002F GCC (编译器基础设施，多数工具基于此构建)\n    *   CUDA (如需使用 GPU 加速模型训练)\n\n## 获取资源\n\n### 1. 
克隆仓库\n直接克隆该仓库到本地，以便离线浏览目录和链接。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fzwang4\u002Fawesome-machine-learning-in-compilers.git\ncd awesome-machine-learning-in-compilers\n```\n\n> **国内加速提示**: 如果访问 GitHub 较慢，可使用国内镜像源（如 Gitee 上的同步仓库，若有）或通过代理加速克隆：\n> ```bash\n> git clone https:\u002F\u002Fgitee.com\u002Fmirror\u002Fawesome-machine-learning-in-compilers.git\n> ```\n> *(注：若 Gitee 无实时同步镜像，建议配置 Git 代理加速克隆)*\n\n### 2. 浏览分类资源\n进入目录后，打开 `README.md` 文件，根据需求查找对应分类：\n*   **入门综述**: 查看 `Papers -> Survey` 章节，推荐阅读 *Machine Learning in Compiler Optimisation (IEEE, 2018)*。\n*   **实战工具**: 查看 `Software` 章节，获取可运行的开源工具链接（如 `MILEPOST GCC`, `Autophase` 等）。\n*   **数据与基准**: 查看 `Benchmarks and Datasets` 章节，下载用于训练模型的数据集。\n\n## 基本使用示例\n\n由于这是一个资源索引，“使用”通常指**复现列表中某个具体工具**。以下以复现经典的 **MILEPOST GCC** (基于机器学习的编译器) 为例，展示如何利用该列表进行实践：\n\n### 步骤 1: 定位目标\n在 `README.md` 的 `Iterative Compilation` 或 `Software` 部分找到 *MILEPOST GCC* 的相关论文和链接。\n\n### 步骤 2: 获取源码\n根据列表提供的线索（通常指向 cTuning 基金会或相关 GitHub 仓库），克隆工具源码：\n\n```bash\n# 示例：获取相关的编译器插件或框架 (具体地址需参考 README 中的最新链接)\ngit clone https:\u002F\u002Fgithub.com\u002Fctuning\u002Fck-mlops.git\n# ck 命令由 cTuning 的 Collective Knowledge 框架提供，需先安装\npip install ck\nck pull repo:ck-ml\n```\n\n### 步骤 3: 安装依赖并运行\n大多数现代 ML 编译器工具使用 Python 封装。进入具体项目目录后，通常执行以下标准操作：\n\n```bash\n# 创建虚拟环境\npython3 -m venv venv\nsource venv\u002Fbin\u002Factivate\n\n# 安装依赖\npip install -r requirements.txt\n\n# 运行示例脚本 (以典型的自动调优脚本为例)\npython run_autotuning.py --benchmark=corpus --model=rf\n```\n\n### 步骤 4: 探索更多\n回到 `awesome-machine-learning-in-compilers` 的 `README.md`，尝试其他类别：\n*   想研究 **LLM 与编译器结合**？跳转至 `The New Compiler Stack: A Survey on the Synergy of LLMs and Compilers`。\n*   想获取 **测试数据集**？跳转至 `Benchmarks and Datasets` 下载标准测试集（如 PolyBench, SPEC CPU）。\n\n---\n**提示**: 该列表的核心价值在于其**分类索引**。建议开发者先阅读 `Survey` 部分的综述论文建立理论框架，再根据 `Software` 部分寻找现成的代码库进行二次开发或实验复现。","某高性能计算团队正在为新一代 AI 芯片开发定制化编译器，急需通过自动调优技术挖掘硬件极致性能。\n\n### 没有 awesome-machine-learning-in-compilers 时\n- **文献检索如大海捞针**：团队成员需手动在各大会议（如 
CGO、ASPLOS）和期刊中筛选“机器学习用于编译优化”的论文，耗时数周仍难以覆盖关键成果。\n- **技术路线盲目试错**：缺乏对迭代编译、指令级优化等细分领域的系统认知，导致在“搜索空间构建”等核心问题上重复造轮子，甚至选错算法方向。\n- **数据与工具链断裂**：找不到权威的基准测试集（Benchmarks）和开源数据集，无法复现前沿论文效果，模型训练因缺乏高质量数据而停滞。\n- **领域知识更新滞后**：错过关于大语言模型（LLM）与编译器协同的最新综述，未能及时将生成式 AI 引入代码表示学习，错失架构升级窗口。\n\n### 使用 awesome-machine-learning-in-compilers 后\n- **核心资源一键直达**：直接获取按“自动调优”、“并行映射”等场景分类的精选论文列表，半天内即可锁定如 SRTuner 等最适合当前芯片特性的顶会方案。\n- **研发路径清晰明确**：借助分类清晰的综述文章，快速理解不同优化策略的适用边界，避免了在无效搜索空间上的算力浪费，决策效率提升显著。\n- **实验环境快速搭建**：利用列表中提供的专用数据集和工具链接，迅速复现基准测试，将模型从理论验证到实际部署的周期从数月缩短至数周。\n- **前沿技术无缝衔接**：通过追踪列表中持续更新的“LLM 与编译器协同”等最新研究，成功引入基于程序表示学习的新型优化器，大幅提升了代码生成质量。\n\nawesome-machine-learning-in-compilers 将原本分散孤立的学术资源转化为结构化的工程导航图，让编译器研发团队能站在巨人肩膀上加速创新。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzwang4_awesome-machine-learning-in-compilers_fb2b06ef.png","zwang4","Zheng Wang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzwang4_3ea28ca9.jpg","Professor at the School of Computer Science at the University of Leeds.","University of Leeds",null,"https:\u002F\u002Fzwang4.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fzwang4",1659,177,"2026-04-01T01:04:43","CC0-1.0",1,"","未说明",{"notes":91,"python":89,"dependencies":92},"该仓库是一个 curated list（精选列表），主要收集了关于将机器学习应用于编译器和程序优化的研究论文、数据集和工具的资源链接。它本身不是一个可执行的软件工具或框架，因此没有具体的运行环境、依赖库或硬件需求。用户需根据列表中引用的具体论文或工具去查询其各自的环境要求。",[],[13],[95,96,97,98,99,100,101,102,103,104,105],"machine-learning","compiler","optimisation","parallel-computing","parallel-programming","parallelism","parallelisation","artificial-intelligence","operating-systems","auto-tuning","multi-cores","2026-03-27T02:49:30.150509","2026-04-06T08:42:15.271721",[],[]]