[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-fengbintu--Neural-Networks-on-Silicon":3,"tool-fengbintu--Neural-Networks-on-Silicon":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":81,"stars":84,"forks":85,"last_commit_at":86,"license":81,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":94,"view_count":97,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":98,"updated_at":99,"faqs":100,"releases":128},190,"fengbintu\u002FNeural-Networks-on-Silicon","Neural-Networks-on-Silicon","This is originally a collection of papers on neural network accelerators. Now it's more like my selection of research on deep learning and computer architecture.","Neural-Networks-on-Silicon 是由香港科技大学助理教授涂峰斌维护的一份精选论文合集，聚焦于深度学习与计算机体系结构交叉领域的前沿研究，特别是 AI 芯片设计与硬件加速器方向。它最初是神经网络加速器相关论文的整理，现已扩展为涵盖 ISSCC、ISCA、MICRO 等顶级会议中值得关注的 AI 芯片研究成果，帮助读者快速掌握该领域的发展脉络。\n\n这个项目解决了研究人员在海量文献中筛选高质量、高相关性论文的难题，尤其适合关注 AI 硬件架构、芯片设计、边缘计算加速等方向的研究生、工程师和学术人员使用。虽然不提供代码或工具包，但它像一份精心编排的“技术地图”，引导用户深入关键工作，如 DianNao 系列、FPGA 加速优化等经典研究。\n\n独特之处在于其持续更新的会议论文年表（从 2014 至 2026），覆盖芯片设计全流程的重要会议，体现作者对行业趋势的敏锐把握。如果你正在探索如何让神经网络跑得更快、更省电、更贴近传感器，这份清单会是你不错的起点。","# Neural Networks on Silicon\n\nFengbin Tu is an Assistant Professor and the Associate Director of the Institute of Integrated Circuits and Systems at The Hong Kong University of Science and Technology, NSFC Excellent Young Scientist, and a core faculty member of the AI Chip Center for Emerging Smart Systems (ACCESS) under InnoHK. For more informantion about Dr. Tu, please refer to [his homepage](https:\u002F\u002Ffengbintu.github.io\u002F). Dr. Tu's main research interest is AI chip and system. This is an exciting field where fresh ideas come out every day, so he's collecting works on related topics. 
Welcome to join!

## Table of Contents
 - [My Contributions](#my-contributions)
 - [Conference Papers](#conference-papers)
   - 2014: [ASPLOS](#2014-asplos), [MICRO](#2014-micro)
   - 2015: [ISCA](#2015-isca), [ASPLOS](#2015-asplos), [FPGA](#2015-fpga), [DAC](#2015-dac)
   - 2016: [ISSCC](#2016-isscc), [ISCA](#2016-isca), [MICRO](#2016-micro), [HPCA](#2016-hpca), [DAC](#2016-dac), [FPGA](#2016-fpga), [ICCAD](#2016-iccad), [DATE](#2016-date), [ASPDAC](#2016-aspdac), [VLSI](#2016-vlsi), [FPL](#2016-fpl)
   - 2017: [ISSCC](#2017-isscc), [ISCA](#2017-isca), [MICRO](#2017-micro), [HPCA](#2017-hpca), [ASPLOS](#2017-asplos), [DAC](#2017-dac), [FPGA](#2017-fpga), [ICCAD](#2017-iccad), [DATE](#2017-date), [VLSI](#2017-vlsi), [FCCM](#2017-fccm), [HotChips](#2017-hotchips)
   - 2018: [ISSCC](#2018-isscc), [ISCA](#2018-isca), [MICRO](#2018-micro), [HPCA](#2018-hpca), [ASPLOS](#2018-asplos), [DAC](#2018-dac), [FPGA](#2018-fpga), [ICCAD](#2018-iccad), [DATE](#2018-date), [ASPDAC](#2018-aspdac), [VLSI](#2018-vlsi), [HotChips](#2018-hotchips)
   - 2019: [ISSCC](#2019-isscc), [ISCA](#2019-isca), [MICRO](#2019-micro), [HPCA](#2019-hpca), [ASPLOS](#2019-asplos), [DAC](#2019-dac), [FPGA](#2019-fpga), [ICCAD](#2019-iccad), [ASPDAC](#2019-aspdac), [VLSI](#2019-vlsi), [HotChips](#2019-hotchips), [ASSCC](#2019-asscc)
   - 2020: [ISSCC](#2020-isscc), [ISCA](#2020-isca), [MICRO](#2020-micro), [HPCA](#2020-hpca), [ASPLOS](#2020-asplos), [DAC](#2020-dac), [FPGA](#2020-fpga), [ICCAD](#2020-iccad), [VLSI](#2020-vlsi), [HotChips](#2020-hotchips)
   - 2021: [ISSCC](#2021-isscc), [ISCA](#2021-isca), [MICRO](#2021-micro), [HPCA](#2021-hpca), [ASPLOS](#2021-asplos), [DAC](#2021-dac), [ICCAD](#2021-iccad), [VLSI](#2021-vlsi), [HotChips](#2021-hotchips)
   - 2022: [ISSCC](#2022-isscc), [ISCA](#2022-isca), [MICRO](#2022-micro), [HPCA](#2022-hpca), [ASPLOS](#2022-asplos), [HotChips](#2022-hotchips)
   - 2023: [ISSCC](#2023-isscc), [ISCA](#2023-isca), [MICRO](#2023-micro), [HPCA](#2023-hpca), [ASPLOS](#2023-asplos), [HotChips](#2023-hotchips)
   - 2024: [ISSCC](#2024-isscc), [ISCA](#2024-isca), [MICRO](#2024-micro), [HPCA](#2024-hpca), [ASPLOS](#2024-asplos), [HotChips](#2024-hotchips)
   - 2025: [ISSCC](#2025-isscc), [ISCA](#2025-isca), [MICRO](#2025-micro), [HPCA](#2025-hpca), [ASPLOS](#2025-asplos), [HotChips](#2025-hotchips)
   - 2026: [HPCA](#2026-hpca)

## My Contributions
My main research interest is AI chips and architecture. For more information about me and my research, you can go to [my homepage](https://fengbintu.github.io/research/).

## Conference Papers
This is a collection of AI chip-related conference papers that interest me.

### 2014 ASPLOS
- **DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning.** (CAS, Inria)

### 2014 MICRO
- **DaDianNao: A Machine-Learning Supercomputer.** (CAS, Inria, Inner Mongolia University)

### 2015 ISCA
- **ShiDianNao: Shifting Vision Processing Closer to the Sensor.** (CAS, EPFL, Inria)

### 2015 ASPLOS
- **PuDianNao: A Polyvalent Machine Learning Accelerator.** (CAS, USTC, Inria)

### 2015 FPGA
- **Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks.** (Peking University, UCLA)
### 2015 DAC
- Reno: A Highly-Efficient Reconfigurable Neuromorphic Computing Accelerator Design. (University of Pittsburgh, Tsinghua University, San Francisco State University, Air Force Research Laboratory, University of Massachusetts)
- Scalable Effort Classifiers for Energy Efficient Machine Learning. (Purdue University, Microsoft Research)
- Design Methodology for Operating in Near-Threshold Computing (NTC) Region. (AMD)
- Opportunistic Turbo Execution in NTC: Exploiting the Paradigm Shift in Performance Bottlenecks. (Utah State University)

### 2016 DAC
- **DeepBurning: Automatic Generation of FPGA-based Learning Accelerators for the Neural Network Family.** (Chinese Academy of Sciences)
  - *Hardware generator: basic building blocks for neural networks, and an address generation unit (RTL).*
  - *Compiler: dynamic control flow (configurations for different models), and data layout in memory.*
  - *Simply reports their framework and describes some stages.*
- **C-Brain: A Deep Learning Accelerator that Tames the Diversity of CNNs through Adaptive Data-Level Parallelization.** (Chinese Academy of Sciences)
- **Simplifying Deep Neural Networks for Neuromorphic Architectures.** (Incheon National University)
- **Dynamic Energy-Accuracy Trade-off Using Stochastic Computing in Deep Neural Networks.** (Samsung, Seoul National University, Ulsan National Institute of Science and Technology)
- **Optimal Design of JPEG Hardware under the Approximate Computing Paradigm.** (University of Minnesota, TAMU)
- Perform-ML: Performance Optimized Machine Learning by Platform and Content Aware Customization. (Rice University, UCSD)
- Low-Power Approximate Convolution Computing Unit with Domain-Wall Motion Based “Spin-Memristor” for Image Processing Applications. (Purdue University)
- Cross-Layer Approximations for Neuromorphic Computing: From Devices to Circuits and Systems. (Purdue University)
- Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. (Tsinghua University)
- A 2.2 GHz SRAM with High Temperature Variation Immunity for Deep Learning Application under 28nm. (UCLA, Bell Labs)

### 2016 ISSCC
- **A 1.42TOPS/W Deep Convolutional Neural Network Recognition Processor for Intelligent IoE Systems.** (KAIST)
- **Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks.** (MIT, NVIDIA)
- A 126.1mW Real-Time Natural UI/UX Processor with Embedded Deep Learning Core for Low-Power Smart Glasses Systems. (KAIST)
- A 502GOPS and 0.984mW Dual-Mode ADAS SoC with RNN-FIS Engine for Intention Prediction in Automotive Black-Box System. (KAIST)
- A 0.55V 1.1mW Artificial-Intelligence Processor with PVT Compensation for Micro Robots. (KAIST)
- A 4Gpixel/s 8/10b H.265/HEVC Video Decoder Chip for 8K Ultra HD Applications. (Waseda University)
### 2016 ISCA
- **Cnvlutin: Ineffectual-Neuron-Free Deep Convolutional Neural Network Computing.** (University of Toronto, University of British Columbia)
- **EIE: Efficient Inference Engine on Compressed Deep Neural Network.** (Stanford University, Tsinghua University)
- **Minerva: Enabling Low-Power, High-Accuracy Deep Neural Network Accelerators.** (Harvard University)
- **Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks.** (MIT, NVIDIA)
  - *Present an energy analysis framework.*
  - *Propose an energy-efficient dataflow called Row Stationary, which considers three levels of reuse (a toy sketch of the reuse idea follows this section).*
- **Neurocube: A Programmable Digital Neuromorphic Architecture with High-Density 3D Memory.** (Georgia Institute of Technology, SRI International)
  - *Propose an architecture integrated in 3D DRAM, with a mesh-like NoC in the logic layer.*
  - *Describe the data movements in the NoC in detail.*
- ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars. (University of Utah, HP Labs)
  - *An advance over ISAAC has been published in "Newton: Gravitating Towards the Physical Limits of Crossbar Acceleration" (IEEE Micro).*
- A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory. (UCSB, HP Labs, NVIDIA, Tsinghua University)
- RedEye: Analog ConvNet Image Sensor Architecture for Continuous Mobile Vision. (Rice University)
- Cambricon: An Instruction Set Architecture for Neural Networks. (Chinese Academy of Sciences, UCSB)
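The row-stationary idea above is easiest to see in a toy model: decompose a 2-D convolution into 1-D row convolutions, keep one filter row resident per (virtual) PE, and reuse it across every sliding window of an input row. The sketch below only illustrates that reuse arithmetic; it is not the Eyeriss design, and the PE mapping and fetch counters are assumptions made for the example.

```python
import numpy as np

def conv2d_row_stationary(x, w):
    """Valid 2-D cross-correlation decomposed into 1-D row convolutions.

    Toy model of a row-stationary-style dataflow: each filter row is
    loaded once (stationary in a virtual PE) and reused across every
    sliding window of every input row it touches.
    """
    H, W = x.shape
    R, S = w.shape
    out = np.zeros((H - R + 1, W - S + 1))
    weight_fetches = 0
    for r in range(R):                      # one virtual PE per filter row
        w_row = w[r]                        # fetched from global memory once
        weight_fetches += S
        for i in range(out.shape[0]):
            x_row = x[i + r]                # input row streamed past the PE
            for j in range(out.shape[1]):   # w_row reused for every window
                out[i, j] += np.dot(x_row[j:j + S], w_row)
    naive_fetches = out.size * R * S        # fetch-per-MAC baseline
    return out, weight_fetches, naive_fetches

x = np.arange(36.0).reshape(6, 6)
w = np.ones((3, 3))
out, rs, naive = conv2d_row_stationary(x, w)
print(out.shape, rs, naive)   # (4, 4) 9 144: 16x fewer weight fetches here
```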
### 2016 DATE
- **The Neuro Vector Engine: Flexibility to Improve Convolutional Network Efficiency for Wearable Vision.** (Eindhoven University of Technology, Soochow University, TU Berlin)
  - *Propose a SIMD accelerator for CNN.*
- **Efficient FPGA Acceleration of Convolutional Neural Networks Using Logical-3D Compute Array.** (UNIST, Seoul National University)
  - *The compute tile is organized on 3 dimensions: Tm, Tr, Tc (a tiled loop-nest sketch follows this section).*
- NEURODSP: A Multi-Purpose Energy-Optimized Accelerator for Neural Networks. (CEA LIST)
- MNSIM: Simulation Platform for Memristor-Based Neuromorphic Computing System. (Tsinghua University, UCSB, Arizona State University)
- Accelerated Artificial Neural Networks on FPGA for Fault Detection in Automotive Systems. (Nanyang Technological University, University of Warwick)
- Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks. (Purdue University)
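Tiling along output feature maps (Tm) and output rows/columns (Tr, Tc) is the loop transformation behind such logical-3D compute arrays: the three innermost loops below are the ones hardware unrolls spatially. A minimal sketch with made-up tile sizes; real design-space exploration also tiles input channels and chooses Tm/Tr/Tc against on-chip buffer and DSP budgets.

```python
import numpy as np

def conv_tiled(x, w, Tm=4, Tr=8, Tc=8):
    """Valid convolution with output loops tiled along output channels (Tm),
    output rows (Tr), and output columns (Tc). The intra-tile loops are the
    ones a logical-3D compute array would unroll in hardware."""
    M, C, R, S = w.shape                    # out-ch, in-ch, kernel H, kernel W
    _, H, W = x.shape
    OH, OW = H - R + 1, W - S + 1
    out = np.zeros((M, OH, OW))
    for m0 in range(0, M, Tm):              # tile loops: pick an output block
        for i0 in range(0, OH, Tr):
            for j0 in range(0, OW, Tc):
                for m in range(m0, min(m0 + Tm, M)):          # intra-tile
                    for i in range(i0, min(i0 + Tr, OH)):     # loops
                        for j in range(j0, min(j0 + Tc, OW)): # (unrolled)
                            out[m, i, j] = np.sum(x[:, i:i + R, j:j + S] * w[m])
    return out

x = np.random.default_rng(0).normal(size=(3, 16, 16))
w = np.random.default_rng(1).normal(size=(6, 3, 3, 3))
ref = conv_tiled(x, w, Tm=6, Tr=14, Tc=14)   # one big tile == untiled result
assert np.allclose(conv_tiled(x, w), ref)
```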
### 2016 FPGA
- **Going Deeper with Embedded FPGA Platform for Convolutional Neural Network.** \[[Slides](http://www.isfpga.org/fpga2016/index_files/Slides/1_2.pdf)\]\[[Demo](http://www.isfpga.org/fpga2016/index_files/Slides/1_2_demo.m4v)\] (Tsinghua University, MSRA)
  - *The first work I have seen that runs the entire flow of a CNN, including both CONV and FC layers.*
  - *Points out that CONV layers are compute-centric, while FC layers are memory-centric.*
  - *The FPGA runs VGG16-SVD without reconfiguring its resources, but the convolver can only support k=3.*
  - *Dynamic-precision data quantization is creative, but not implemented on hardware (a toy per-layer quantizer follows this section).*
- **Throughput-Optimized OpenCL-based FPGA Accelerator for Large-Scale Convolutional Neural Networks.** \[[Slides](http://www.isfpga.org/fpga2016/index_files/Slides/1_1.pdf)\] (Arizona State Univ, ARM)
  - *Spatially allocate FPGA's resources to CONV/POOL/NORM/FC layers.*
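Dynamic-precision (per-layer) quantization gives each layer's fixed-point format its own radix point. The sketch below spends integer bits on the layer's largest magnitude and gives the rest to the fraction; this is a simplified stand-in, since the paper searches radix points per layer against accuracy, and the bit widths here are arbitrary.

```python
import numpy as np

def quantize_layer(t, total_bits=8):
    """Per-layer fixed-point quantization with a data-dependent radix point.

    Integer bits are chosen to cover the layer's largest magnitude (plus a
    sign bit); the remaining bits become fraction bits, so layers with small
    values keep more precision.
    """
    max_abs = float(np.max(np.abs(t))) + 1e-12
    int_bits = max(0, int(np.ceil(np.log2(max_abs))) + 1)   # +1 for sign
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(t * scale), qmin, qmax)
    return q / scale, frac_bits

rng = np.random.default_rng(0)
for name, layer in [("conv1", rng.normal(0, 2.0, 1000)),
                    ("fc", rng.normal(0, 0.05, 1000))]:
    deq, fb = quantize_layer(layer)
    print(name, "frac_bits =", fb, "mean |err| =", np.abs(deq - layer).mean())
```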
### 2016 ASPDAC
- **Design Space Exploration of FPGA-Based Deep Convolutional Neural Networks.** (UC Davis)
- **LRADNN: High-Throughput and Energy-Efficient Deep Neural Network Accelerator using Low Rank Approximation.** (Hong Kong University of Science and Technology, Shanghai Jiao Tong University)
- **Efficient Embedded Learning for IoT Devices.** (Purdue University)
- ACR: Enabling Computation Reuse for Approximate Computing. (Chinese Academy of Sciences)

### 2016 VLSI
- **A 0.3-2.6 TOPS/W Precision-Scalable Processor for Real-Time Large-Scale ConvNets.** (KU Leuven)
  - *Use dynamic precision for different CONV layers, and scale down the MAC array's supply voltage at lower precision.*
  - *Prevent memory fetches and MAC operations based on the ReLU sparsity.*
- **A 1.40mm2 141mW 898GOPS Sparse Neuromorphic Processor in 40nm CMOS.** (University of Michigan)
- A 58.6mW Real-Time Programmable Object Detector with Multi-Scale Multi-Object Support Using Deformable Parts Model on 1920x1080 Video at 30fps. (MIT)
- A Machine-learning Classifier Implemented in a Standard 6T SRAM Array. (Princeton)

### 2016 ICCAD
- **Efficient Memory Compression in Deep Neural Networks Using Coarse-Grain Sparsification for Speech Applications.** (Arizona State University)
- **Memsqueezer: Re-architecting the On-chip memory Sub-system of Deep Learning Accelerator for Embedded Devices.** (Chinese Academy of Sciences)
- **Caffeine: Towards Uniformed Representation and Acceleration for Deep Convolutional Neural Networks.** (Peking University, UCLA, Falcon)
  - *Propose a uniformed convolutional matrix-multiplication representation for accelerating CONV and FC layers on FPGA (an im2col-style sketch of this representation follows this section).*
  - *Propose a weight-major convolutional mapping method for FC layers, which has good data reuse, DRAM access burst length and effective bandwidth.*
- **BoostNoC: Power Efficient Network-on-Chip Architecture for Near Threshold Computing.** (Utah State University)
- Design of Power-Efficient Approximate Multipliers for Approximate Artificial Neural Network. (Brno University of Technology)
- Neural Networks Designing Neural Networks: Multi-Objective Hyper-Parameter Optimization. (McGill University)
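Caffeine's "uniformed representation" maps both CONV and FC layers onto one matrix-multiply engine. The standard software analogue is im2col: unfold receptive fields into the columns of a matrix so a convolution becomes one GEMM (an FC layer already is one). This is a generic textbook sketch, not Caffeine's FPGA mapping.

```python
import numpy as np

def im2col(x, R, S):
    """Unfold a (C, H, W) input into a (C*R*S, OH*OW) matrix whose columns
    are flattened receptive fields."""
    C, H, W = x.shape
    OH, OW = H - R + 1, W - S + 1
    cols = np.empty((C * R * S, OH * OW))
    for i in range(OH):
        for j in range(OW):
            cols[:, i * OW + j] = x[:, i:i + R, j:j + S].ravel()
    return cols

def conv_as_gemm(x, w):
    """CONV expressed as a single matrix multiply (same engine as FC)."""
    M, C, R, S = w.shape
    OH, OW = x.shape[1] - R + 1, x.shape[2] - S + 1
    return (w.reshape(M, -1) @ im2col(x, R, S)).reshape(M, OH, OW)

x = np.random.default_rng(0).normal(size=(3, 8, 8))
w = np.random.default_rng(1).normal(size=(4, 3, 3, 3))
direct = np.zeros((4, 6, 6))
for m in range(4):
    for i in range(6):
        for j in range(6):
            direct[m, i, j] = np.sum(x[:, i:i + 3, j:j + 3] * w[m])
assert np.allclose(conv_as_gemm(x, w), direct)
```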
### 2016 MICRO
- **From High-Level Deep Neural Models to FPGAs.** (Georgia Institute of Technology, Intel)
  - *Develop a macro dataflow ISA for DNN accelerators.*
  - *Develop hand-optimized template designs that are scalable and highly customizable.*
  - *Provide a Template Resource Optimization search algorithm to co-optimize the accelerator architecture and scheduling.*
- **vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design.** (NVIDIA)
- **Stripes: Bit-Serial Deep Neural Network Computing.** (University of Toronto, University of British Columbia)
  - *Introduce serial computation and reduced-precision computation to neural network accelerator designs, enabling accuracy vs. performance trade-offs.*
  - *Design a bit-serial computing unit to enable linear scaling of performance with precision reduction (a toy bit-serial dot product follows this section).*
- **Cambricon-X: An Accelerator for Sparse Neural Networks.** (Chinese Academy of Sciences)
- **NEUTRAMS: Neural Network Transformation and Co-design under Neuromorphic Hardware Constraints.** (Tsinghua University, UCSB)
- **Fused-Layer CNN Accelerators.** (Stony Brook University)
  - *Fuse multiple CNN layers (CONV+POOL) to reduce DRAM access for input/output data.*
- **Bridging the I/O Performance Gap for Big Data Workloads: A New NVDIMM-based Approach.** (The Hong Kong Polytechnic University, NSF/University of Florida)
- **A Patch Memory System For Image Processing and Computer Vision.** (NVIDIA)
- **An Ultra Low-Power Hardware Accelerator for Automatic Speech Recognition.** (Universitat Politecnica de Catalunya)
- Perceptron Learning for Reuse Prediction. (TAMU, Intel Labs)
  - *Train neural networks to predict reuse of cache blocks.*
- A Cloud-Scale Acceleration Architecture. (Microsoft Research)
- Reducing Data Movement Energy via Online Data Clustering and Encoding. (University of Rochester)
- The Microarchitecture of a Real-time Robot Motion Planning Accelerator. (Duke University)
- Chameleon: Versatile and Practical Near-DRAM Acceleration Architecture for Large Memory Systems. (UIUC, Seoul National University)
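The bit-serial trade-off Stripes exploits can be reproduced in a few lines: feed activations one bit-plane per cycle, multiply each bit-plane by the weights (in hardware, AND gates), then shift and accumulate. Cycle count scales linearly with activation precision. A toy model with unsigned activations; the real unit also handles signedness and per-layer precisions.

```python
import numpy as np

def bit_serial_dot(acts, weights, precision):
    """Dot product computed one activation bit-plane per 'cycle'.

    Cycles == precision, so lowering a layer's activation precision speeds
    it up linearly (the Stripes trade-off).
    """
    acts = np.asarray(acts, dtype=np.int64)        # unsigned activations
    weights = np.asarray(weights, dtype=np.int64)
    total = 0
    for b in range(precision):                     # one cycle per bit
        bit_plane = (acts >> b) & 1                # serial activation bits
        total += int(np.dot(bit_plane, weights)) << b   # shift-accumulate
    return total

acts, wts = [3, 5, 7], [2, -1, 4]
assert bit_serial_dot(acts, wts, precision=3) == np.dot(acts, wts)  # 29
```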
### 2016 FPL
- **A High Performance FPGA-based Accelerator for Large-Scale Convolutional Neural Network.** (Fudan University)
- **Overcoming Resource Underutilization in Spatial CNN Accelerators.** (Stony Brook University)
  - *Build multiple accelerators, each specialized for specific CNN layers, instead of a single accelerator with uniform tiling parameters.*
- **Accelerating Recurrent Neural Networks in Analytics Servers: Comparison of FPGA, CPU, GPU, and ASIC.** (Intel)

### 2016 HPCA
- **A Performance Analysis Framework for Optimizing OpenCL Applications on FPGAs.** (Nanyang Technological University, HKUST, Cornell University)
- **TABLA: A Unified Template-based Architecture for Accelerating Statistical Machine Learning.** (Georgia Institute of Technology)
- Memristive Boltzmann Machine: A Hardware Accelerator for Combinatorial Optimization and Deep Learning. (University of Rochester)

### 2017 FPGA
- **An OpenCL Deep Learning Accelerator on Arria 10.** (Intel)
  - *Minimum bandwidth requirement: all the intermediate data in AlexNet's CONV layers are cached in the on-chip buffer, so their architecture is compute-bound.*
  - *Reduced operations: Winograd transformation (a worked F(2,3) example follows this section).*
  - *High usage of the available DSPs + reduced computation -> higher performance on FPGA -> competitive efficiency vs. TitanX.*
- **ESE: Efficient Speech Recognition Engine for Compressed LSTM on FPGA.** (Stanford University, DeepPhi, Tsinghua University, NVIDIA)
- **FINN: A Framework for Fast, Scalable Binarized Neural Network Inference.** (Xilinx, Norwegian University of Science and Technology, University of Sydney)
- **Can FPGA Beat GPUs in Accelerating Next-Generation Deep Neural Networks?** (Intel)
- **Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs.** (Cornell University, UCLA, UCSD)
- **Improving the Performance of OpenCL-based FPGA Accelerator for Convolutional Neural Network.** (UW-Madison)
- **Frequency Domain Acceleration of Convolutional Neural Networks on CPU-FPGA Shared Memory System.** (USC)
- **Optimizing Loop Operation and Dataflow in FPGA Acceleration of Deep Convolutional Neural Networks.** (Arizona State University)
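The Winograd transformation the Arria 10 design cites trades multiplies for adds. The smallest case, F(2,3), produces two outputs of a 3-tap filter with 4 multiplies instead of 6; the fixed matrices below are the standard Winograd/Toom-Cook ones. The sketch covers the 1-D case only; 2-D tiles nest the same transform.

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap correlation in 4 multiplies (not 6).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    U = G @ g            # filter transform (precomputed once per filter)
    V = BT @ d           # input transform (adds/subtracts only)
    return AT @ (U * V)  # 4 elementwise multiplies, then cheap adds

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, -1.0, 2.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])
assert np.allclose(winograd_f23(d, g), direct)   # [5, 7]
```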
### 2017 ISSCC
- **A 2.9TOPS/W Deep Convolutional Neural Network SoC in FD-SOI 28nm for Intelligent Embedded Systems.** (ST)
- **DNPU: An 8.1TOPS/W Reconfigurable CNN-RNN Processor for General Purpose Deep Neural Networks.** (KAIST)
- **ENVISION: A 0.26-to-10TOPS/W Subword-Parallel Computational Accuracy-Voltage-Frequency-Scalable Convolutional Neural Network Processor in 28nm FDSOI.** (KU Leuven)
- **A 288µW Programmable Deep-Learning Processor with 270KB On-Chip Weight Storage Using Non-Uniform Memory Hierarchy for Mobile Intelligence.** (University of Michigan, CubeWorks)
- A 28nm SoC with a 1.2GHz 568nJ/Prediction Sparse Deep-Neural-Network Engine with >0.1 Timing Error Rate Tolerance for IoT Applications. (Harvard)
- A Scalable Speech Recognizer with Deep-Neural-Network Acoustic Models and Voice-Activated Power Gating. (MIT)
- A 0.62mW Ultra-Low-Power Convolutional-Neural-Network Face Recognition Processor and a CIS Integrated with Always-On Haar-Like Face Detector. (KAIST)

### 2017 HPCA
- **FlexFlow: A Flexible Dataflow Accelerator Architecture for Convolutional Neural Networks.** (Chinese Academy of Sciences)
- **PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning.** (University of Pittsburgh, University of Southern California)
- Towards Pervasive and User Satisfactory CNN across GPU Microarchitectures. (University of Florida)
  - *Satisfaction of CNN (SoC) is the combination of SoC-time, SoC-accuracy and energy consumption.*
  - *The P-CNN framework is composed of offline compilation and run-time management.*
    - *Offline compilation: generally optimizes runtime, and generates scheduling configurations for the run-time stage.*
    - *Run-time management: generates tuning tables through accuracy tuning, and calibrates accuracy+runtime (selects the best tuning table) during long-term execution.*
- Supporting Address Translation for Accelerator-Centric Architectures. (UCLA)

### 2017 ASPLOS
- **Tetris: Scalable and Efficient Neural Network Acceleration with 3D Memory.** (Stanford University)
  - *Move accumulation operations close to the DRAM banks.*
  - *Develop a hybrid partitioning scheme that parallelizes the NN computations over multiple accelerators.*
- SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing. (Syracuse University, USC, The City College of New York)
  - *(A toy unipolar stochastic-computing multiply follows this section.)*
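Stochastic computing, which SC-DCNN and several of the DAC entries above build on, encodes a value in [0, 1] as the ones-density of a random bitstream; multiplying two independent unipolar streams is then a single AND gate per bit position. A toy software model; accuracy grows only with stream length, which is SC's central trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n):
    """Unipolar SC encoding: a length-n bitstream with P(bit = 1) = p."""
    return rng.random(n) < p

def sc_mul(pa, pb, n):
    """Multiply via bitwise AND: the ones-density of (A & B) is pa * pb
    for independent streams."""
    return (to_stream(pa, n) & to_stream(pb, n)).mean()

for n in (100, 10_000, 1_000_000):
    print(n, sc_mul(0.5, 0.4, n))   # converges to 0.20 as n grows
```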
### 2017 ISCA
- **Maximizing CNN Accelerator Efficiency Through Resource Partitioning.** (Stony Brook University)
  - *An extension of their FPL'16 paper.*
- **In-Datacenter Performance Analysis of a Tensor Processing Unit.** (Google)
- **SCALEDEEP: A Scalable Compute Architecture for Learning and Evaluating Deep Networks.** (Purdue University, Intel)
  - *Propose a full-system (server node) architecture, focusing on the challenge of DNN training (intra- and inter-layer heterogeneity).*
- **SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks.** (NVIDIA, MIT, UC Berkeley, Stanford University)
- **Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism.** (University of Michigan, ARM)
- Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent. (Stanford)
- LogCA: A High-Level Performance Model for Hardware Accelerators. (AMD, University of Wisconsin-Madison)
- APPROX-NoC: A Data Approximation Framework for Network-On-Chip Architectures. (TAMU)

### 2017 FCCM
- **Escher: A CNN Accelerator with Flexible Buffering to Minimize Off-Chip Transfer.** (Stony Brook University)
- **Customizing Neural Networks for Efficient FPGA Implementation.**
- **Evaluating Fast Algorithms for Convolutional Neural Networks on FPGAs.**
- **FP-DNN: An Automated Framework for Mapping Deep Neural Networks onto FPGAs with RTL-HLS Hybrid Templates.** (Peking University, HKUST, MSRA, UCLA)
  - *Compute-intensive part: RTL-based generalized matrix multiplication kernel.*
  - *Layer-specific part: HLS-based control logic.*
  - *Memory-intensive part: several techniques for lower DRAM bandwidth requirements.*
- FPGA accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-off.
- A Configurable FPGA Implementation of the Tanh Function using DCT Interpolation.

### 2017 DAC
- **Deep^3: Leveraging Three Levels of Parallelism for Efficient Deep Learning.** (UCSD, Rice)
- **Real-Time meets Approximate Computing: An Elastic Deep Learning Accelerator Design with Adaptive Trade-off between QoS and QoR.** (CAS)
  - *I'm not sure whether the proposed tuning scenario and direction are reasonable enough to find feasible solutions.*
- **Exploring Heterogeneous Algorithms for Accelerating Deep Convolutional Neural Networks on FPGAs.** (PKU, CUHK, SenseTime)
- **Hardware-Software Codesign of Highly Accurate, Multiplier-free Deep Neural Networks.** (Brown University)
- **A Kernel Decomposition Architecture for Binary-weight Convolutional Neural Networks.** (KAIST)
- **Design of An Energy-Efficient Accelerator for Training of Convolutional Neural Networks using Frequency-Domain Computation.** (Georgia Tech)
- **New Stochastic Computing Multiplier and Its Application to Deep Neural Networks.** (UNIST)
- **TIME: A Training-in-memory Architecture for Memristor-based Deep Neural Networks.** (THU, UCSB)
- **Fault-Tolerant Training with On-Line Fault Detection for RRAM-Based Neural Computing Systems.** (THU, Duke)
- **Automating the systolic array generation and optimizations for high throughput convolution neural network.** (PKU, UCLA, Falcon)
- **Towards Full-System Energy-Accuracy Tradeoffs: A Case Study of An Approximate Smart Camera System.** (Purdue)
  - *Synergistically tunes component-level approximation knobs to achieve system-level energy-accuracy tradeoffs.*
- **Error Propagation Aware Timing Relaxation For Approximate Near Threshold Computing.** (KIT)
- RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks. (Purdue)
- Rescuing Memristor-based Neuromorphic Design with High Defects. (University of Pittsburgh, HP Lab, Duke)
- Group Scissor: Scaling Neuromorphic Computing Design to Big Neural Networks. (University of Pittsburgh, Duke)
- Towards Aging-induced Approximations. (KIT, UT Austin)
- SABER: Selection of Approximate Bits for the Design of Error Tolerant Circuits. (University of Minnesota, TAMU)
- On Quality Trade-off Control for Approximate Computing using Iterative Training. (SJTU, CUHK)
### 2017 DATE
- **DVAFS: Trading Computational Accuracy for Energy Through Dynamic-Voltage-Accuracy-Frequency-Scaling.** (KU Leuven)
- **Accelerator-friendly Neural-network Training: Learning Variations and Defects in RRAM Crossbar.** (Shanghai Jiao Tong University, University of Pittsburgh, Lynmax Research)
- **A Novel Zero Weight/Activation-Aware Hardware Architecture of Convolutional Neural Network.** (Seoul National University)
  - *Solve the zero-induced load imbalance problem.*
- **Understanding the Impact of Precision Quantization on the Accuracy and Energy of Neural Networks.** (Brown University)
- **Design Space Exploration of FPGA Accelerators for Convolutional Neural Networks.** (Samsung, UNIST, Seoul National University)
- **MoDNN: Local Distributed Mobile Computing System for Deep Neural Network.** (University of Pittsburgh, George Mason University, University of Maryland)
- **Chain-NN: An Energy-Efficient 1D Chain Architecture for Accelerating Deep Convolutional Neural Networks.** (Waseda University)
- **LookNN: Neural Network with No Multiplication.** (UCSD)
  - *Cluster weights and use a LUT to avoid multiplication (a toy LUT-based dot product follows this section).*
- Energy-Efficient Approximate Multiplier Design using Bit Significance-Driven Logic Compression. (Newcastle University)
- Revamping Timing Error Resilience to Tackle Choke Points at NTC Systems. (Utah State University)
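LookNN's note above, cluster the weights and use a LUT to avoid multiplication, can be modeled directly: once weights are clustered to a few centroids and activations quantized to a few levels, every possible product fits in a small precomputed table, and inference indexes the table instead of multiplying. The grid sizes and uniform quantization below are made up for the example; the paper clusters trained weights.

```python
import numpy as np

# Made-up quantization grids: 16 activation levels, 8 weight centroids.
act_levels  = np.linspace(0.0, 1.0, 16)
w_centroids = np.linspace(-1.0, 1.0, 8)

# Precompute every activation x centroid product once (the LUT).
lut = np.outer(act_levels, w_centroids)      # shape (16, 8)

def lut_dot(a_idx, w_idx):
    """Dot product using only table lookups and adds: no runtime multiply."""
    return lut[a_idx, w_idx].sum()

a_idx = np.array([3, 15, 7])     # activation level indices
w_idx = np.array([0, 4, 2])      # weight cluster indices
exact = act_levels[a_idx] @ w_centroids[w_idx]
assert np.isclose(lut_dot(a_idx, w_idx), exact)
```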
### 2017 VLSI
- **A 3.43TOPS/W 48.9pJ/Pixel 50.1nJ/Classification 512 Analog Neuron Sparse Coding Neural Network with On-Chip Learning and Classification in 40nm CMOS.** (University of Michigan, Intel)
- **BRein Memory: A 13-Layer 4.2 K Neuron/0.8 M Synapse Binary/Ternary Reconfigurable In-Memory Deep Neural Network Accelerator in 65 nm CMOS.** (Hokkaido University, Tokyo Institute of Technology, Keio University)
- **A 1.06-To-5.09 TOPS/W Reconfigurable Hybrid-Neural-Network Processor for Deep Learning Applications.** (Tsinghua University)
- **A 127mW 1.63TOPS sparse spatio-temporal cognitive SoC for action classification and motion tracking in videos.** (University of Michigan)

### 2017 ICCAD
- **AEP: An Error-bearing Neural Network Accelerator for Energy Efficiency and Model Protection.** (University of Pittsburgh)
- VoCaM: Visualization oriented convolutional neural network acceleration on mobile system. (George Mason University, Duke)
- AdaLearner: An Adaptive Distributed Mobile Learning System for Neural Networks. (Duke)
- MeDNN: A Distributed Mobile System with Enhanced Partition and Deployment for Large-Scale DNNs. (Duke)
- TraNNsformer: Neural Network Transformation for Memristive Crossbar based Neuromorphic System Design. (Purdue)
- A Closed-loop Design to Enhance Weight Stability of Memristor Based Neural Network Chips. (Duke)
- Fault injection attack on deep neural network. (CUHK)
- ORCHARD: Visual Object Recognition Accelerator Based on Approximate In-Memory Processing. (UCSD)

### 2017 HotChips
- **A Dataflow Processing Chip for Training Deep Neural Networks.** (Wave Computing)
- **Brainwave: Accelerating Persistent Neural Networks at Datacenter Scale.** (Microsoft)
- **DNN ENGINE: A 16nm Sub-uJ Deep Neural Network Inference Accelerator for the Embedded Masses.** (Harvard, ARM)
- **DNPU: An Energy-Efficient Deep Neural Network Processor with On-Chip Stereo Matching.** (KAIST)
- **Evaluation of the Tensor Processing Unit (TPU): A Deep Neural Network Accelerator for the Datacenter.** (Google)
- NVIDIA’s Volta GPU: Programmability and Performance for GPU Computing. (NVIDIA)
- Knights Mill: Intel Xeon Phi Processor for Machine Learning. (Intel)
- XPU: A programmable FPGA Accelerator for diverse workloads. (Baidu)

### 2017 MICRO
- **Bit-Pragmatic Deep Neural Network Computing.** (NVIDIA, University of Toronto)
- **CirCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices.** (Syracuse University, City University of New York, USC, California State University, Northeastern University)
- **DRISA: A DRAM-based Reconfigurable In-Situ Accelerator.** (UCSB, Samsung)
- **Scale-Out Acceleration for Machine Learning.** (Georgia Tech, UCSD)
  - Propose CoSMIC, a full computing stack constituting language, compiler, system software, template architecture, and circuit generators, that enables programmable acceleration of learning at scale.
- DeftNN: Addressing Bottlenecks for DNN Execution on GPUs via Synapse Vector Elimination and Near-compute Data Fission. (Univ. of Michigan, Univ. of Nevada)
- Data Movement Aware Computation Partitioning. (PSU, TOBB University of Economics and Technology)
  - *Partition computation on a manycore system for near data processing.*

### 2018 ASPDAC
- **ReGAN: A Pipelined ReRAM-Based Accelerator for Generative Adversarial Networks.** (University of Pittsburgh, Duke)
- **Accelerator-centric Deep Learning Systems for Enhanced Scalability, Energy-efficiency, and Programmability.** (POSTECH)
- **Architectures and Algorithms for User Customization of CNNs.** (Seoul National University, Samsung)
- **Optimizing FPGA-based Convolutional Neural Networks Accelerator for Image Super-Resolution.** (Sogang University)
- **Running sparse and low-precision neural network: when algorithm meets hardware.** (Duke)
### 2018 ISSCC
- **A 55nm Time-Domain Mixed-Signal Neuromorphic Accelerator with Stochastic Synapses and Embedded Reinforcement Learning for Autonomous Micro-Robots.** (Georgia Tech)
- **A Shift Towards Edge Machine-Learning Processing.** (Google)
- **QUEST: A 7.49TOPS Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96MB 3D SRAM Using Inductive-Coupling Technology in 40nm CMOS.** (Hokkaido University, Ultra Memory, Keio University)
- **UNPU: A 50.6TOPS/W Unified Deep Neural Network Accelerator with 1b-to-16b Fully-Variable Weight Bit-Precision.** (KAIST)
- **A 9.02mW CNN-Stereo-Based Real-Time 3D Hand-Gesture Recognition Processor for Smart Mobile Devices.** (KAIST)
- **An Always-On 3.8μJ/86% CIFAR-10 Mixed-Signal Binary CNN Processor with All Memory on Chip in 28nm CMOS.** (Stanford, KU Leuven)
- **Conv-RAM: An Energy-Efficient SRAM with Embedded Convolution Computation for Low-Power CNN-Based Machine Learning Applications.** (MIT)
- **A 42pJ/Decision 3.12TOPS/W Robust In-Memory Machine Learning Classifier with On-Chip Training.** (UIUC)
- **Brain-Inspired Computing Exploiting Carbon Nanotube FETs and Resistive RAM: Hyperdimensional Computing Case Study.** (Stanford, UC Berkeley, MIT)
- **A 65nm 1Mb Nonvolatile Computing-in-Memory ReRAM Macro with Sub-16ns Multiply-and-Accumulate for Binary DNN AI Edge Processors.** (NTHU)
- **A 65nm 4Kb Algorithm-Dependent Computing-in-Memory SRAM Unit Macro with 2.3ns and 55.8TOPS/W Fully Parallel Product-Sum Operation for Binary DNN Edge Processors.** (NTHU, TSMC, UESTC, ASU)
- **A 1μW Voice Activity Detector Using Analog Feature Extraction and Digital Deep Neural Network.** (Columbia University)

### 2018 HPCA
- **Making Memristive Neural Network Accelerators Reliable.** (University of Rochester)
- **Towards Efficient Microarchitectural Design for Accelerating Unsupervised GAN-based Deep Learning.** (University of Florida)
- **Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks.** (POSTECH, NVIDIA, UT-Austin)
- **In-situ AI: Towards Autonomous and Incremental Deep Learning for IoT Systems.** (University of Florida, Chongqing University, Capital Normal University)
- RC-NVM: Enabling Symmetric Row and Column Memory Accesses for In-Memory Databases. (PKU, NUDT, Duke, UCLA, PSU)
- GraphR: Accelerating Graph Processing Using ReRAM. (Duke, USC, Binghamton University SUNY)
- GraphP: Reducing Communication of PIM-based Graph Processing with Efficient Data Partition. (THU, USC, Stanford)
- PM3: Power Modeling and Power Management for Processing-in-Memory. (PKU)

### 2018 ASPLOS
- **Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler.** (Tsinghua, UCSB)
- **MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects.** (Georgia Tech)
  - *Higher PE utilization: use an augmented reduction tree (reconfigurable interconnects) to construct arbitrarily sized virtual neurons (a toy segmented reduction follows this section).*
- **VIBNN: Hardware Acceleration of Bayesian Neural Networks.** (Syracuse University, USC)
- Exploiting Dynamical Thermal Energy Harvesting for Reusing in Smartphone with Mobile Applications. (Guizhou University, University of Florida)
- Potluck: Cross-application Approximate Deduplication for Computation-Intensive Mobile Applications. (Yale)
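MAERI's "virtual neurons" come from a reduction tree whose links can be reconfigured, so a fixed array of multipliers can be carved into dot products of arbitrary sizes. The sketch below fakes that with a segmented sum over a flat vector of multiplier outputs; it shows only the mapping flexibility, none of the tree's actual wiring.

```python
import numpy as np

def virtual_neuron_reduce(products, sizes):
    """Fold a fixed 1-D array of multiplier outputs into variable-size
    'virtual neurons' by choosing where the reduction is cut."""
    assert sum(sizes) == len(products), "must use every multiplier"
    out, start = [], 0
    for n in sizes:                       # one virtual neuron per segment
        out.append(products[start:start + n].sum())
        start += n
    return np.array(out)

prods = np.arange(12.0)                   # 12 physical multipliers
print(virtual_neuron_reduce(prods, [3, 5, 4]))   # -> [ 3. 25. 38.]
print(virtual_neuron_reduce(prods, [6, 6]))      # same array, new mapping
```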
### 2018 VLSI
- **STICKER: A 0.41-62.1 TOPS/W 8bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers.** (THU)
- **2.9TOPS/W Reconfigurable Dense/Sparse Matrix-Multiply Accelerator with Unified INT8/INT16/FP16 Datapath in 14nm Tri-gate CMOS.** (Intel)
- **A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference.** (IBM)
- **An Ultra-high Energy-efficient Reconfigurable Processor for Deep Neural Networks with Binary/Ternary Weights in 28nm CMOS.** (THU)
- **B-Face: 0.2 mW CNN-Based Face Recognition Processor with Face Alignment for Mobile User Identification.** (KAIST)
- **A 141 uW, 2.46 pJ/Neuron Binarized Convolutional Neural Network based Self-learning Speech Recognition Processor in 28nm CMOS.** (THU)
- **A Mixed-Signal Binarized Convolutional-Neural-Network Accelerator Integrating Dense Weight Storage and Multiplication for Reduced Data Movement.** (Princeton)
- **PhaseMAC: A 14 TOPS/W 8bit GRO based Phase Domain MAC Circuit for In-Sensor-Computed Deep Learning Accelerators.** (Toshiba)

### 2018 FPGA
- **C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs.** (Peking Univ, Syracuse Univ, CUNY)
- **DeltaRNN: A Power-efficient Recurrent Neural Network Accelerator.** (ETHZ, BenevolentAI)
- **Towards a Uniform Template-based Architecture for Accelerating 2D and 3D CNNs on FPGA.** (National Univ of Defense Tech)
- **A Customizable Matrix Multiplication Framework for the Intel HARPv2 Xeon+FPGA Platform - A Deep Learning Case Study.** (The Univ of Sydney, Intel)
- **A Framework for Generating High Throughput CNN Implementations on FPGAs.** (USC)
- Liquid Silicon: A Data-Centric Reconfigurable Architecture enabled by RRAM Technology. (UW Madison)

### 2018 ISCA
- **RANA: Towards Efficient Neural Acceleration with Refresh-Optimized Embedded DRAM.** (THU)
- **Brainwave: A Configurable Cloud-Scale DNN Processor for Real-Time AI.** (Microsoft)
- **PROMISE: An End-to-End Design of a Programmable Mixed-Signal Accelerator for Machine Learning Algorithms.** (UIUC)
- **Computation Reuse in DNNs by Exploiting Input Similarity.** (UPC)
- **GANAX: A Unified SIMD-MIMD Acceleration for Generative Adversarial Network.** (Georgia Tech, IPM, Qualcomm, UCSD, UIUC)
- **SnaPEA: Predictive Early Activation for Reducing Computation in Deep Convolutional Neural Networks.** (UCSD, Georgia Tech, Qualcomm)
- **UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition.** (UIUC, NVIDIA)
- **An Energy-Efficient Neural Network Accelerator based on Outlier-Aware Low Precision Computation.** (Seoul National)
- **Prediction based Execution on Deep Neural Networks.** (Florida)
- **Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks.** (Georgia Tech, ARM, UCSD)
- **Gist: Efficient Data Encoding for Deep Neural Network Training.** (Michigan, Microsoft, Toronto)
- **The Dark Side of DNN Pruning.** (UPC)
- **Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks.** (Michigan)
- EVA^2: Exploiting Temporal Redundancy in Live Computer Vision. (Cornell)
- Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision. (Rochester, Georgia Tech, ARM)
- Feature-Driven and Spatially Folded Digital Neurons for Efficient Spiking Neural Network Simulations. (POSTECH/Berkeley, Seoul National)
- Space-Time Algebra: A Model for Neocortical Computation. (Wisconsin)
- Scaling Datacenter Accelerators With Compute-Reuse Architectures. (Princeton)
   - *Add an NVM-based storage layer to the accelerator, for computation reuse.*
- Enabling Scientific Computing on Memristive Accelerators. (Rochester)

### 2018 DATE
- **MATIC: Learning Around Errors for Efficient Low-Voltage Neural Network Accelerators.** (University of Washington)
   - *Learn around errors resulting from SRAM voltage scaling, demonstrated on a fabricated 65nm test chip.*
- **Maximizing System Performance by Balancing Computation Loads in LSTM Accelerators.** (POSTECH)
   - *Sparse matrix format that load-balances computation, demonstrated for LSTMs (a toy round-robin nonzero assignment follows this section).*
- **CCR: A Concise Convolution Rule for Sparse Neural Network Accelerators.** (CAS)
   - *Decompose convolution into multiple dense and zero kernels for sparsity savings.*
- **Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA.** (CAS)
- **moDNN: Memory Optimal DNN Training on GPUs.** (University of Notre Dame, CAS)
- HyperPower: Power and Memory-Constrained Hyper-Parameter Optimization for Neural Networks. (CMU, Google)
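The load-balancing idea in the POSTECH LSTM entry can be shown with a toy format: instead of giving each PE one (possibly almost-empty) matrix row, deal the nonzeros out to processing lanes in round-robin order, so every lane ends up with nearly the same MAC count. A simplified stand-in for the paper's format; the lane count and layout below are arbitrary.

```python
import numpy as np

def deal_nonzeros(w, lanes=4):
    """Assign (row, col, value) nonzeros to lanes in round-robin order so
    each lane performs about the same number of MACs, no matter how skewed
    the per-row sparsity is."""
    work = [[] for _ in range(lanes)]
    for k, (r, c) in enumerate(zip(*np.nonzero(w))):
        work[k % lanes].append((int(r), int(c), float(w[r, c])))
    return work

w = np.array([[0, 2, 0, 1, 3],      # 3 nonzeros
              [4, 0, 0, 0, 0],      # 1 nonzero (a row-per-PE scheme idles here)
              [0, 5, 6, 7, 8]])     # 4 nonzeros
for lane, items in enumerate(deal_nonzeros(w)):
    print("lane", lane, "macs:", len(items))   # 2 MACs per lane
```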
### 2018 DAC
- **Compensated-DNN: Energy Efficient Low-Precision Deep Neural Networks by Compensating Quantization Errors.** (**Best Paper**, Purdue, IBM)
  - *Introduce a new fixed-point representation, Fixed Point with Error Compensation (FPEC): computation bits, plus compensation bits that represent the quantization error (a toy version follows this section).*
  - *Propose a low-overhead sparse compensation scheme to estimate the error in MAC design.*
- **Calibrating Process Variation at System Level with In-Situ Low-Precision Transfer Learning for Analog Neural Network Processors.** (THU)
- **DPS: Dynamic Precision Scaling for Stochastic Computing-Based Deep Neural Networks.** (UNIST)
- **DyHard-DNN: Even More DNN Acceleration With Dynamic Hardware Reconfiguration.** (Univ. of Virginia)
- **Exploring the Programmability for Deep Learning Processors: from Architecture to Tensorization.** (Univ. of Washington)
- **LCP: Layer Clusters Paralleling Mapping Mechanism for Accelerating Inception and Residual Networks on FPGA.** (THU)
- **A Kernel Decomposition Architecture for Binary-weight Convolutional Neural Networks.** (THU)
- **Ares: A Framework for Quantifying the Resilience of Deep Neural Networks.** (Harvard)
- **ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Learning Accelerators.** (New York Univ., IIT Kanpur)
- **Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks.** (Univ. of Toronto)
- **Parallelizing SRAM Arrays with Customized Bit-Cell for Binary Neural Networks.** (Arizona)
- **Thermal-Aware Optimizations of ReRAM-Based Neuromorphic Computing Systems.** (Northwestern Univ.)
- **SNrram: An Efficient Sparse Neural Network Computation Architecture Based on Resistive Random Access Memory.** (THU, UCSB)
- **Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification.** (THU, CAS, MIT)
- **Bandwidth-Efficient Deep Learning.** (MIT, Stanford)
- **Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications.** (Berkeley)
- **Sign-Magnitude SC: Getting 10X Accuracy for Free in Stochastic Computing for Deep Neural Networks.** (UNIST)
- **DrAcc: A DRAM Based Accelerator for Accurate CNN Inference.** (National Univ. of Defense Technology, Indiana Univ., Univ. of Pittsburgh)
- **On-Chip Deep Neural Network Storage With Multi-Level eNVM.** (Harvard)
- VRL-DRAM: Improving DRAM Performance via Variable Refresh Latency. (Drexel Univ., ETHZ)
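FPEC's idea, as summarized above, is to carry a low-precision main code plus a few compensation bits that encode the quantization error. A schematic version: quantize to the main grid, quantize the leftover error on a finer grid with very few levels, and add the two back at accumulate time. The bit widths and scales below are illustrative only, not the paper's format.

```python
import numpy as np

def q(x, bits, scale):
    """Symmetric fixed-point quantization to `bits` bits at step 1/scale."""
    hi = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), -hi - 1, hi) / scale

def fpec(x, main_bits=4, comp_bits=2, scale=8):
    """Fixed point with error compensation, schematically: a main code plus
    a tiny code for the quantization error on a finer grid."""
    main = q(x, main_bits, scale)
    comp = q(x - main, comp_bits, scale * 2 ** comp_bits)  # finer step
    return main + comp

x = np.random.default_rng(1).normal(0.0, 0.4, 10_000)
plain = q(x, 4, 8)
print("plain |err|:", np.abs(plain - x).mean())
print("fpec  |err|:", np.abs(fpec(x) - x).mean())   # compensation shrinks error
```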
### 2018 HotChips
- **ARM's First Generation ML Processor.** (ARM)
- **The NVIDIA Deep Learning Accelerator.** (NVIDIA)
- **Xilinx Tensor Processor: An Inference Engine, Network Compiler + Runtime for Xilinx FPGAs.** (Xilinx)
- Tachyum Cloud Chip for Hyperscale workloads, deep ML, general, symbolic and bio AI. (Tachyum)
- SMIV: A 16nm SoC with Efficient and Flexible DNN Acceleration for Intelligent IoT Devices. (ARM)
- NVIDIA's Xavier System-on-Chip. (NVIDIA)
- Xilinx Project Everest: HW/SW Programmable Engine. (Xilinx)

### 2018 ICCAD
- **Tetris: Re-architecting Convolutional Neural Network Computation for Machine Learning Accelerators.** (CAS)
- **3DICT: A Reliable and QoS Capable Mobile Process-In-Memory Architecture for Lookup-based CNNs in 3D XPoint ReRAMs.** (Indiana University Bloomington, Florida International Univ.)
- **TGPA: Tile-Grained Pipeline Architecture for Low Latency CNN Inference.** (PKU, UCLA, Falcon)
- **NID: Processing Binary Convolutional Neural Network in Commodity DRAM.** (KAIST)
- **Adaptive-Precision Framework for SGD using Deep Q-Learning.** (PKU)
- **Efficient Hardware Acceleration of CNNs using Logarithmic Data Representation with Arbitrary log-base.** (Robert Bosch GmbH)
- **C-GOOD: C-code Generation Framework for Optimized On-device Deep Learning.** (SNU)
- **Mixed Size Crossbar based RRAM CNN Accelerator with Overlapped Mapping Method.** (THU)
- **FCN-Engine: Accelerating Deconvolutional Layers in Classic CNN Processors.** (HUT, CAS, NUS)
- **DNNBuilder: an Automated Tool for Building High-Performance DNN Hardware Accelerators for FPGAs.** (UIUC)
- **DIMA: A Depthwise CNN In-Memory Accelerator.** (Univ. of Central Florida)
- **EMAT: An Efficient Multi-Task Architecture for Transfer Learning using ReRAM.** (Duke)
- **FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design.** (NYU)
- **Designing Adaptive Neural Networks for Energy-Constrained Image Classification.** (CMU)
- Watermarking Deep Neural Networks for Embedded Systems. (UCLA)
- Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks. (Northeastern Univ., Boston Univ., Florida International Univ.)
- A Cross-Layer Methodology for Design and Optimization of Networks in 2.5D Systems. (Boston Univ., UCSD)

### 2018 MICRO
- **Addressing Irregularity in Sparse Neural Networks: A Cooperative Software/Hardware Approach.** (USTC, CAS)
- **Diffy: a Deja vu-Free Differential Deep Neural Network Accelerator.** (University of Toronto)
- **Beyond the Memory Wall: A Case for Memory-centric HPC System for Deep Learning.** (KAIST)
- **Towards Memory Friendly Long-Short Term Memory Networks (LSTMs) on Mobile GPUs.** (University of Houston, Capital Normal University)
- **A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks.** (UIUC, THU, SJTU, Intel, UCSD)
- **PermDNN: Efficient Compressed Deep Neural Network Architecture with Permuted Diagonal Matrices.** (City University of New York, University of Minnesota, USC)
- **GeneSys: Enabling Continuous Learning through Neural Network Evolution in Hardware.** (Georgia Tech)
- **Processing-in-Memory for Energy-efficient Neural Network Training: A Heterogeneous Approach.** (UCM, UCSD, UCSC)
  - Schedules computing resources provided by the CPU and heterogeneous PIMs (fixed-function logic + programmable ARM cores) to optimize energy efficiency and hardware utilization.
- **LerGAN: A Zero-free, Low Data Movement and PIM-based GAN Architecture.** (THU, University of Florida)
- **Multi-dimensional Parallel Training of Winograd Layer through Distributed Near-Data Processing.** (KAIST)
  - Winograd is applied to training to extend traditional data parallelism with a new dimension named intra-tile parallelism. With intra-tile parallelism, nodes are divided into several groups, and weight-update communication occurs independently within each group. The method shows better scalability for training clusters, as total communication does not grow with node count (a toy message-count comparison follows this section).
- **SCOPE: A Stochastic Computing Engine for DRAM-based In-situ Accelerator.** (UCSB, Samsung)
- **Morph: Flexible Acceleration for 3D CNN-based Video Understanding.** (UIUC)
- Inter-thread Communication in Multithreaded, Reconfigurable Coarse-grain Arrays. (Technion)
- An Architectural Framework for Accelerating Dynamic Parallel Algorithms on Reconfigurable Hardware. (Cornell)
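The scalability claim in the KAIST Winograd-training entry reduces to a message-count argument: if weight-update exchanges stay inside fixed-size groups, per-step traffic grows linearly with the node count rather than quadratically. The all-to-all baseline below is an assumption chosen for contrast, not the paper's baseline system.

```python
def all_to_all_msgs(n):
    """Naive baseline: every node exchanges updates with every other node."""
    return n * (n - 1)

def grouped_msgs(n, g):
    """Intra-tile grouping: exchanges stay inside groups of size g, so each
    node sends a constant (g - 1) messages as the cluster grows."""
    assert n % g == 0
    return (n // g) * g * (g - 1)

for n in (8, 16, 32, 64):
    print(f"n={n:3d}  all-to-all={all_to_all_msgs(n):5d}  "
          f"grouped(g=4)={grouped_msgs(n, 4):4d}")
```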
### 2019 ASPDAC
- **An N-way group association architecture and sparse data group association load balancing algorithm for sparse CNN accelerators.** (THU)
- **TNPU: An Efficient Accelerator Architecture for Training Convolutional Neural Networks.** (ICT)
- **NeuralHMC: An Efficient HMC-Based Accelerator for Deep Neural Networks.** (University of Pittsburgh, Duke)
- **P3M: A PIM-based Neural Network Model Protection Scheme for Deep Learning Accelerator.** (ICT)
- GraphSAR: A Sparsity-Aware Processing-in-Memory Architecture for Large-Scale Graph Processing on ReRAMs. (Tsinghua, MIT, Berkeley)

### 2019 ISSCC
- **An 11.5TOPS/W 1024-MAC Butterfly Structure Dual-Core Sparsity-Aware Neural Processing Unit in 8nm Flagship Mobile SoC.** (Samsung)
- **A 20.5TOPS and 217.3GOPS/mm2 Multicore SoC with DNN Accelerator and Image Signal Processor Complying with ISO26262 for Automotive Applications.** (Toshiba)
- **An 879GOPS 243mW 80fps VGA Fully Visual CNN-SLAM Processor for Wide-Range Autonomous Exploration.** (Michigan)
- **A 2.1TFLOPS/W Mobile Deep RL Accelerator with Transposable PE Array and Experience Compression.** (KAIST)
- **A 65nm 0.39-to-140.3TOPS/W 1-to-12b Unified Neural-Network Processor Using Block-Circulant-Enabled Transpose-Domain Acceleration with 8.1× Higher TOPS/mm2 and 6T HBST-TRAM-Based 2D Data-Reuse Architecture.** (THU, National Tsing Hua University, Northeastern University)
- **A 65nm 236.5nJ/Classification Neuromorphic Processor with 7.5% Energy Overhead On-Chip Learning Using Direct Spike-Only Feedback.** (SNU)
- **LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16.** (KAIST)
- A 1Mb Multibit ReRAM Computing-In-Memory Macro with 14.6ns Parallel MAC Computing Time for CNN-Based AI Edge Processors. (National Tsing Hua University)
- Sandwich-RAM: An Energy-Efficient In-Memory BWN Architecture with Pulse-Width Modulation. (Southeast University, Boxing Electronics, THU)
- A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN Based Machine Learning. (National Tsing Hua University, University of Electronic Science and Technology of China, ASU, Georgia Tech)
- A Reconfigurable RRAM Physically Unclonable Function Utilizing Post-Process Randomness Source with <6×10-6 Native Bit Error Rate. (THU, National Tsing Hua University, Georgia Tech)
- A 65nm 1.1-to-9.1TOPS/W Hybrid-Digital-Mixed-Signal Computing Platform for Accelerating Model-Based and Model-Free Swarm Robotics. (Georgia Tech)
- A Compute SRAM with Bit-Serial Integer/Floating-Point Operations for Programmable In-Memory Vector Acceleration. (Michigan)
- All-Digital Time-Domain CNN Engine Using Bidirectional Memory Delay Lines for Energy-Efficient Edge Computing. (UT Austin)

### 2019 HPCA
- **HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array.** (Duke, USC)
- **E-RNN: Design Optimization for Efficient Recurrent Neural Networks in FPGAs.** (Syracuse University, Northeastern University, Florida International University, USC, University at Buffalo)
- **Bit Prudent In-Cache Acceleration of Deep Convolutional Neural Networks.** (Michigan, Intel)
- **Shortcut Mining: Exploiting Cross-layer Shortcut Reuse in DCNN Accelerators.** (OSU)
- **NAND-Net: Minimizing Computational Complexity of In-Memory Processing for Binary Neural Networks.** (KAIST)
- **Kelp: QoS for Accelerators in Machine Learning Platforms.** (Microsoft, Google, UT Austin)
- **Machine Learning at Facebook: Understanding Inference at the Edge.** (Facebook)
- The Accelerator Wall: Limits of Chip Specialization. (Princeton)

### 2019 ASPLOS
- **FA3C: FPGA-Accelerated Deep Reinforcement Learning.** (Hongik University, SNU)
- **PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference.** (Purdue, UIUC, HP)
- **FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture.** (THU, UCSB)
- **Bit-Tactical: A Software/Hardware Approach to Exploiting Value and Bit Sparsity in Neural Networks.** (Toronto, NVIDIA)
- **TANGRAM: Optimized Coarse-Grained Dataflow for Scalable NN Accelerators.** (Stanford)
- **Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization.** (Harvard)
- **Split-CNN: Splitting Window-based Operations in Convolutional Neural Networks for Memory System Optimization.** (IBM, Kyungpook National University)
- **HOP: Heterogeneity-Aware Decentralized Training.** (USC, THU)
- **Astra: Exploiting Predictability to Optimize Deep Learning.** (Microsoft)
- **ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Methods of Multipliers.** (Northeastern, Syracuse, SUNY, Buffalo, USC)
- **DeepSigns: An End-to-End Watermarking Framework for Protecting the Ownership of Deep Neural Networks.** (UCSD)

### 2019 ISCA
- **Sparse ReRAM Engine: Joint Exploration of Activation and Weight Sparsity on Compressed Neural Network.** (NTU, Academia Sinica, Macronix)
- **MnnFast: A Fast and Scalable System Architecture for Memory-Augmented Neural Networks.** (POSTECH, SNU)
- **TIE: Energy-efficient Tensor Train-based Inference Engine for Deep Neural Network.** (Rutgers University, Nanjing University, USC)
- **Accelerating Distributed Reinforcement Learning with In-Switch Computing.** (UIUC)
- **Eager Pruning: Algorithm and Architecture Support for Fast Training of Deep Neural Networks.** (University of Florida)
- **Laconic Deep Learning Inference Acceleration.** (Toronto)
- **DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks.** (UCSD)
- **A Stochastic-Computing based Deep Learning Framework using Adiabatic Quantum-Flux-Parametron Superconducting Technology.** (Northeastern, Yokohama National University, USC, University of Alberta)
- **Fractal Machine Learning Computers.** (ICT)
- **FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision.** (UCSD)
- Energy-Efficient Video Processing for Virtual Reality. (UIUC, University of Rochester)
- Scalable Interconnects for Reconfigurable Spatial Architectures. (Stanford)
- CoNDA: Enabling Efficient Near-Data Accelerator Communication by Optimizing Data Movement. (CMU, ETHZ)

### 2019 DAC
- **Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation Aware Neural Architecture Search.** (East China Normal University, Pittsburgh, Chongqing University, UCI, Notre Dame)
- **FPGA/DNN Co-Design: An Efficient Design Methodology for IoT Intelligence on the Edge.** (UIUC, IBM, Inspirit IoT)
- **An Optimized Design Technique of Low-Bit Neural Network Training for Personalization on IoT Devices.** (KAIST)
- **ReForm: Static and Dynamic Resource-Aware DNN Reconfiguration Framework for Mobile Devices.** (George Mason, Clarkson)
- **DRIS-3: Deep Neural Network Reliability Improvement Scheme in 3D Die-Stacked Memory based on Fault Analysis.** (Sungkyunkwan University)
- **ZARA: A Novel Zero-free Dataflow Accelerator for Generative Adversarial Networks in 3D ReRAM.** (Duke)
- **BitBlade: Area and Energy-Efficient Precision-Scalable Neural Network Accelerator with Bitwise Summation.** (POSTECH)
- X-MANN: A Crossbar based Architecture for Memory Augmented Neural Networks. (Purdue, Intel)
- Thermal-Aware Design and Management for Search-based In-Memory Acceleration. (UCSD)
- An Energy-Efficient Network-on-Chip Design using Reinforcement Learning. (George Washington)
- Designing Vertical Processors in Monolithic 3D. (UIUC)
(UCSB, ICT)\n\n### 2019 ICCAD\n- **Zac: Towards Automatic Optimization and Deployment of Quantized Deep Neural Networks on Embedded Devices.** (PKU)\n- **NAIS: Neural Architecture and Implementation Search and its Applications in Autonomous Driving.** (UIUC)\n- **MAGNet: A Modular Accelerator Generator for Neural Networks.** (NVIDIA)\n- **ReDRAM: A Reconfigurable Processing-in-DRAM Platform for Accelerating Bulk Bit-Wise Operations.** (ASU)\n- **Accelergy: An Architecture-Level Energy Estimation Methodology for Accelerator Designs.** (MIT)\n\n### 2019 ASSCC\n- **A 47.4µJ\u002Fepoch Trainable Deep Convolutional Neural Network Accelerator for In-Situ Personalization on Smart Devices.** (KAIST)\n- **A 2.25 TOPS\u002FW Fully-Integrated Deep CNN Learning Processor with On-Chip Training.** (NTU)\n- **A Sparse-Adaptive CNN Processor with Area\u002FPerformance Balanced N-Way Set-Associate PE Arrays Assisted by a Collision-Aware Scheduler.** (THU, Northeastern)\n- A 24 Kb Single-Well Mixed 3T Gain-Cell eDRAM with Body-Bias in 28 nm FD-SOI for Refresh-Free DSP Applications. (EPFL)\n\n### 2019 VLSI\n- **Area-Efficient and Variation-Tolerant In-Memory BNN Computing Using 6T SRAM Array.** (POSTECH)\n- **A 5.1pJ\u002FNeuron 127.3us\u002FInference RNN-Based Speech Recognition Processor Using 16 Computing-in-Memory SRAM Macros in 65nm CMOS.** (THU, NTU, TsingMicro)\n- **A 0.11 pJ\u002FOp, 0.32-128 TOPS, Scalable, Multi-Chip-Module-Based Deep Neural Network Accelerator with Ground-Reference Signaling in 16nm.** (NVIDIA)\n- **SNAP: A 1.67 – 21.55TOPS\u002FW Sparse Neural Acceleration Processor for Unstructured Sparse Deep Neural Network Inference in 16nm CMOS.** (UMich, NVIDIA)\n- **A Full HD 60 fps CNN Super Resolution Processor with Selective Caching based Layer Fusion for Mobile Devices.** (KAIST)\n- **A 1.32 TOPS\u002FW Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture.** (KAIST)\n- Considerations of Integrating Computing-In-Memory and Processing-In-Sensor into Convolutional Neural Network Accelerators for Low-Power Edge Devices. (NTU, NCHU)\n- Computational Memory-Based Inference and Training of Deep Neural Networks. (IBM, EPFL, ETHZ, et al)\n- A Ternary Based Bit Scalable, 8.80 TOPS\u002FW CNN Accelerator with Many-Core Processing-in-Memory Architecture with 896K Synapses\u002Fmm2. (Renesas)\n- In-Memory Reinforcement Learning with Moderately Stochastic Conductance Switching of Ferroelectric Tunnel Junctions. (Toshiba)\n\n### 2019 HotChips\n- **MLPerf: A Benchmark Suite for Machine Learning from an Academic-Industry Cooperative.** (MLPerf)\n- **Zion: Facebook Next-Generation Large-memory Unified Training Platform.** (Facebook)\n- **A Scalable Unified Architecture for Neural Network Computing from Nano-Level to High Performance Computing.** (Huawei)\n- **Deep Learning Training at Scale – Spring Crest Deep Learning Accelerator.** (Intel)\n- **Spring Hill – Intel’s Data Center Inference Chip.** (Intel)\n- **Wafer Scale Deep Learning.** (Cerebras)\n- **Habana Labs Approach to Scaling AI Training.** (Habana)\n- **Ouroboros: A WaveNet Inference Engine for TTS Applications on Embedded Devices.** (Alibaba)\n- **A 0.11 pJ\u002FOp, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator Designed with a High-Productivity VLSI Methodology.** (NVIDIA)\n- **Xilinx Versal\u002FAI Engine.** (Xilinx)\n- A Programmable Embedded Microprocessor for Bit-scalable In-memory Computing. 
(Princeton)\n\n### 2019 FPGA\n- **Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs.** (THU, Berkeley, Politecnico di Torino, Xilinx)\n- **REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs.** (PKU, Northeastern)\n- **Reconfigurable Convolutional Kernels for Neural Networks on FPGAs.** (University of Kassel)\n- **Efficient and Effective Sparse LSTM on FPGA with Bank-Balanced Sparsity.** (Harbin Institute of Technology, Microsoft, THU, Beihang)\n- **Cloud-DNN: An Open Framework for Mapping DNN Models to Cloud FPGAs.** (Advanced Digital Sciences Center, UIUC)\n- F5-HD: Fast Flexible FPGA-based Framework for Refreshing Hyperdimensional Computing. (UCSD)\n- Xilinx Adaptive Compute Acceleration Platform: Versal Architecture. (Xilinx)\n\n### 2020 ISSCC\n- **A 3.4-to-13.3TOPS\u002FW 3.6TOPS Dual-Core Deep-Learning Accelerator for Versatile AI Applications in 7nm 5G Smartphone SoC.** (MediaTek)\n- **A 12nm Programmable Convolution-Efficient Neural-Processing-Unit Chip Achieving 825TOPS.** (Alibaba)\n- **STATICA: A 512-Spin 0.25M-Weight Full-Digital Annealing Processor with a Near-Memory All-Spin-Updates-at-Once Architecture for Combinatorial Optimization with Complete Spin-Spin Interactions.** (Tokyo Institute of Technology, Hokkaido Univ., Univ. of Tokyo)\n- **GANPU: A 135TFLOPS\u002FW Multi-DNN Training Processor for GANs with Speculative Dual-Sparsity Exploitation.** (KAIST)\n- **A 510nW 0.41V Low-Memory Low-Computation Keyword-Spotting Chip Using Serial FFT-Based MFCC and Binarized Depthwise Separable Convolutional Neural Network in 28nm CMOS.** (Southeast, EPFL, Columbia)\n- **A 65nm 24.7μJ\u002FFrame 12.3mW Activation-Similarity Aware Convolutional Neural Network Video Processor Using Hybrid Precision, Inter-Frame Data Reuse and Mixed-Bit-Width Difference-Frame Data Codec.** (THU)\n- **A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS\u002FW System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter\u002FIntra-Macro Data Reuse.** (THU, NTHU)\n- A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips. (NTU)\n- A 351TOPS\u002FW and 372.4GOPS Compute-in-Memory SRAM Macro in 7nm FinFET CMOS for Machine-Learning Applications. (TSMC)\n- A 22nm 2Mb ReRAM Compute-in-Memory Macro with 121-28TOPS\u002FW for Multibit MAC Computing for Tiny AI Edge Devices. (NTHU)\n- A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips. (NTHU)\n- A 1.5μJ\u002FTask Path-Planning Processor for 2D\u002F3D Autonomous Navigation of Micro Robots. (NTHU)\n- A 65nm 8.79TOPS\u002FW 23.82mW Mixed-Signal Oscillator-Based NeuroSLAM Accelerator for Applications in Edge Robotics. (Georgia Tech)\n- CIM-Spin: A 0.5-to-1.2V Scalable Annealing Processor Using Digital Compute-In-Memory Spin Operators and Register-Based Spins for Combinatorial Optimization Problems. (NTU)\n- A Compute-Adaptive Elastic Clock-Chain Technique with Dynamic Timing Enhancement for 2D PE-Array-Based Accelerators. (Northwestern)\n- A 74 TMACS\u002FW CMOS-RRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-situ Transposable Weights for Probabilistic Graphical Models. (Stanford, UCSD, THU, Notre Dame)\n- A Fully Integrated Analog ReRAM Based 78.4TOPS\u002FW Compute-In-Memory Chip with Fully Parallel MAC Computing. 
(THU, NTHU)\n\n### 2020 HPCA\n- **Deep Learning Acceleration with Neuron-to-Memory Transformation.**\t(UCSD)\n- **HyGCN: A GCN Accelerator with Hybrid Architecture.**\t(ICT, UCSB)\n- **SIGMA: A Sparse and Irregular GEMM Accelerator with Flexible Interconnects for DNN Training.**\t(Georgia Tech)\n- **PREMA: A Predictive Multi-task Scheduling Algorithm For Preemptible NPUs.**\t(KAIST)\n- **ALRESCHA: A Lightweight Reconfigurable Sparse-Computation Accelerator.**\t(Georgia Tech)\n- **SpArch: Efficient Architecture for Sparse Matrix Multiplication.**\t(MIT, NVIDIA)\n- **A3: Accelerating Attention Mechanisms in Neural Networks with Approximation.**\t(SNU)\n- **AccPar: Tensor Partitioning for Heterogeneous Deep Learning Accelerator Arrays.**\t(Duke, USC)\n- **PIXEL: Photonic Neural Network Accelerator.**\t(Ohio, George Washington)\n- **The Architectural Implications of Facebook’s DNN-based Personalized Recommendation.**\t(Facebook)\n- **Enabling Highly Efficient Capsule Networks Processing Through A PIM-Based Architecture Design.**\t(Houston)\n- **Missing the Forest for the Trees: End-to-End AI Application Performance in Edge Data Centers.**\t(UT Austin, Intel)\n- **Communication Lower Bound in Convolution Accelerators.**\t(ICT, THU)\n- **Fulcrum: a Simplified Control and Access Mechanism toward Flexible and Practical in-situ Accelerators.**\t(Virginia, UCSB, Micron)\n- **EFLOPS: Algorithm and System Co-design for a High Performance Distributed Training Platform.**\t(Alibaba)\n- **Experiences with ML-Driven Design: A NoC Case Study.**\t(AMD)\n- **Tensaurus: A Versatile Accelerator for Mixed Sparse-Dense Tensor Computations.**\t(Cornell, Intel)\n- **A Hybrid Systolic-Dataflow Architecture for Inductive Matrix Algorithms.**\t(UCLA)\n- A Deep Reinforcement Learning Framework for Architectural Exploration: A Routerless NoC Case Study.\t(USC, OSU)\n- QuickNN: Memory and Performance Optimization of k-d Tree Based Nearest Neighbor Search for 3D Point Clouds.\t(Umich, General Motors)\n- Orbital Edge Computing: Machine Inference in Space.\t(CMU)\n- A Scalable and Efficient in-Memory Interconnect Architecture for Automata Processing.\t(Virginia)\n- Techniques for Reducing the Connected-Standby Energy Consumption of Mobile Devices.\t(ETHZ, Cyprus, CMU)\n\n### 2020 ASPLOS\n- **Shredder: Learning Noise Distributions to Protect Inference Privacy.**\t(UCSD)\n- **DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks.**\t(CAS, USC)\n- **Interstellar: Using Halide’s Scheduling Language to Analyze DNN Accelerators.**\t(Stanford, THU)\n- **DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints.**\t(UCSB)\n- **Prague: High-Performance Heterogeneity-Aware Asynchronous Decentralized Training.**\t(USC)\n- **PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning.**\t(College of William and Mary, Northeastern, USC)\n- **Capuchin: Tensor-based GPU Memory Management for Deep Learning.**\t(HUST, MSRA, USC)\n- **NeuMMU: Architectural Support for Efficient Address Translations in Neural Processing Units.**\t(KAIST)\n- **FlexTensor: An Automatic Schedule Exploration and Optimization Framework for Tensor Computation on Heterogeneous System.**\t(PKU)\n\n### 2020 DAC\n- **A Pragmatic Approach to On-device Incremental Learning System with Selective Weight Updates.**\n- **A Two-way SRAM Array based Accelerator for Deep Neural Network On-chip Training.**\n- **Algorithm-Hardware Co-Design for In-Memory Neural Network Computing with 
Minimal Peripheral Circuit Overhead.**\n- **Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference.**\n- **Hardware Acceleration of Graph Neural Networks.**\n- **Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training.**\n- **Low-Power Acceleration of Deep Neural Network Training Using Computational Storage Devices.**\n- **Prediction Confidence based Low Complexity Gradient Computation for Accelerating DNN Training.**\n- **SparseTrain: Exploiting Dataflow Sparsity for Efficient Convolutional Neural Networks Training.**\n- **SCA: A Secure CNN Accelerator for both Training and Inference.**\n- **STC: Significance-aware Transform-based Codec Framework for External Memory Access Reduction.**\n\n### 2020 FPGA\n- **AutoDNNchip: An Automated DNN Chip Generator through Compilation, Optimization, and Exploration.** (Rice, UIUC)\n- **Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms.** (USC)\n- Massively Simulating Adiabatic Bifurcations with FPGA to Solve Combinatorial Optimization. (Central Florida)\n\n### 2020 ISCA\n- **Data Compression Accelerator on IBM POWER9 and z15 Processors.** (IBM)\n- **High-Performance Deep-Learning Coprocessor Integrated into x86 SoC with Server-Class CPUs.**\t(Centaur)\n- **Think Fast: A Tensor Streaming Processor (TSP) for Accelerating Deep Learning Workloads.** (Groq)\n- **MLPerf Inference: A Benchmarking Methodology for Machine Learning Inference Systems.**\t\n- **A Multi-Neural Network Acceleration Architecture.** (SNU)\n- **SmartExchange: Trading Higher-Cost Memory Storage\u002FAccess for Lower-Cost Computation.** (Rice, TAMU, UCSB)\n- **Centaur: A Chiplet-Based, Hybrid Sparse-Dense Accelerator for Personalized Recommendations.** (KAIST)\n- **DeepRecSys: A System for Optimizing End-to-End At-Scale Neural Recommendation Inference.**\t(Facebook, Harvard)\n- **An In-Network Architecture for Accelerating Shared-Memory Multiprocessor Collectives.**\t(NVIDIA)\n- **DRQ: Dynamic Region-Based Quantization for Deep Neural Network Acceleration.**\t(SJTU)\n- The IBM z15 High Frequency Mainframe Branch Predictor. (ETHZ)\n- Déjà View: Spatio-Temporal Compute Reuse for Energy-Efficient 360° VR Video Streaming. (Penn State)\n- uGEMM: Unary Computing Architecture for GEMM Applications. (Wisconsin)\n- Gorgon: Accelerating Machine Learning from Relational Data. (Stanford)\n- RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing. (Facebook)\n- JPEG-ACT: Accelerating Deep Learning via Transform-Based Lossy Compression. (UBC)\n- Commutative Data Reordering: A New Technique to Reduce Data Movement Energy on Sparse Inference Workloads. (Sandia, Rochester)\n- Echo: Compiler-Based GPU Memory Footprint Reduction for LSTM RNN Training. (Toronto, Intel)\n\n### 2020 HotChips\n- **Google’s Training Chips Revealed: TPUv2 and TPUv3.** (Google)\n- **Software Co-design for the First Wafer-Scale Processor (and Beyond).** (Cerebras)\n- **Manticore: A 4096-core RISC-V Chiplet Architecture for Ultra-efficient Floating-point Computing.** (ETHZ)\n- **Baidu Kunlun – An AI Processor for Diversified Workloads.** (Baidu)\n- **Hanguang 800 NPU – The Ultimate AI Inference Solution for Data Centers.** (Alibaba)\n- **Silicon Photonics for Artificial Intelligence Acceleration.** (Lightmatter)\n- Xuantie-910: Innovating Cloud and Edge Computing by RISC-V. (Alibaba)\n- A Technical Overview of the ARM Cortex-M55 and Ethos-U55: ARM’s Most Capable Processors for Endpoint AI. 
(ARM)\n- PGMA: A Scalable Bayesian Inference Accelerator for Unsupervised Learning. (Harvard)\n\n### 2020 VLSI\n- **PNPU: A 146.52TOPS\u002FW Deep-Neural-Network Learning Processor with Stochastic Coarse-Fine Pruning and Adaptive Input\u002FOutput\u002FWeight Skipping.** (KAIST)\n- **A 3.0 TFLOPS 0.62V Scalable Processor Core for High Compute Utilization AI Training and Inference.** (IBM)\n- **A 617 TOPS\u002FW All Digital Binary Neural Network Accelerator in 10nm FinFET CMOS.** (Intel)\n- **An Ultra-Low Latency 7.8-13.6 pJ\u002Fb Reconfigurable Neural Network-Assisted Polar Decoder with Multi-Code Length Support.** (NTU)\n- **A 4.45ms Low-Latency 3D Point-Cloud-Based Neural Network Processor for Hand Pose Estimation in Immersive Wearable Devices.** (KAIST)\n- **A 3mm2 Programmable Bayesian Inference Accelerator for Unsupervised Machine Perception Using Parallel Gibbs Sampling in 16nm.** (Harvard)\n- 1.03pW\u002Fb Ultra-Low Leakage Voltage-Stacked SRAM for Intelligent Edge Processors. (Umich)\n- Z-PIM: An Energy-Efficient Sparsity-Aware Processing-In-Memory Architecture with Fully-Variable Weight Precision. (KAIST)\n\n### 2020 MICRO\n- **SuperNPU: An Extremely Fast Neural Processing Unit Using Superconducting Logic Devices.** (Kyushu University)\n- **Printed Machine Learning Classifiers.** (UIUC, KIT)\n- **Look-Up Table based Energy Efficient Processing in Cache Support for Neural Network Acceleration.** (PSU, Intel)\n- **FReaC Cache: Folded-Logic Reconfigurable Computing in the Last Level Cache.** (UIUC, IBM)\n- **Newton: A DRAM-Maker's Accelerator-in-Memory (AiM) Architecture for Machine Learning.** (Purdue, SK Hynix)\n- **VR-DANN: Real-Time Video Recognition via Decoder-Assisted Neural Network Acceleration.** (SJTU)\n- **Procrustes: A Dataflow and Accelerator for Sparse Deep Neural Network Training.** (University of British Columbia, Microsoft)\n- **Duplo: Lifting Redundant Memory Accesses of Deep Neural Networks for GPU Tensor Cores.** (Yonsei University, EcoCloud, EPFL)\n- **DUET: Boosting Deep Neural Network Efficiency on Dual-Module Architecture.** (UCSB, Alibaba)\n- **ConfuciuX: Autonomous Hardware Resource Assignment for DNN Accelerators using Reinforcement Learning.** (GaTech)\n- **Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks.** (UCSD, Bigstream, Kansas, NVIDIA, Google)\n- **TFE: Energy-Efficient Transferred Filter-Based Engine to Compress and Accelerate Convolutional Neural Networks.** (THU, Alibaba)\n- **MatRaptor: A Sparse-Sparse Matrix Multiplication Accelerator Based on Row-Wise Product.** (Cornell)\n- **TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training.** (Toronto)\n- **SAVE: Sparsity-Aware Vector Engine for Accelerating DNN Training and Inference on CPUs.** (UIUC)\n- **GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference.** (Toronto)\n- **TrainBox: An Extreme-Scale Neural Network Training Server Architecture by Systematically Balancing Operations.** (SNU)\n- **AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing.** (Boston et al.)\n- **Mesorasi: Architecture Support for Point Cloud Analytics via Delayed-Aggregation.** (Rochester, ARM)\n- **NCPU: An Embedded Neural CPU Architecture on Resource-Constrained Low Power Devices for Real-Time End-to-End Performance.** (Northwestern University)\n- FlexWatts: A Power- and Workload-Aware Hybrid Power Delivery Network for Energy-Efficient Microprocessors.\t(ETHZ, Intel, Technion, 
NTU)\n- AutoScale: Energy Efficiency Optimization for Stochastic Edge Inference Using Reinforcement Learning.\t(Facebook)\n- CATCAM: Constant-time Alteration Ternary CAM with Scalable In-Memory Architecture.\t(THU, Southeast University)\n- DUAL: Acceleration of Clustering Algorithms using Digital-Based Processing In-Memory.\t(UCSD)\n- Bit-Exact ECC Recovery (BEER): Determining DRAM On-Die ECC Functions by Exploiting DRAM Data Retention Characteristics.\t(ETHZ)\n\n### 2020 ICCAD\n- ReTransformer: ReRAM-based Processing-in-Memory Architecture for Transformer Acceleration.\t(Duke)\n- Energy-efficient XNOR-free In-Memory BNN Accelerator with Input Distribution Regularization.\t(POSTECH)\n- HyperTune: Dynamic Hyperparameter Tuning for Efficient Distribution of DNN Training Over Heterogeneous Systems.\t(UCI, NGD)\n- SynergicLearning: Neural Network-Based Feature Extraction for Highly-Accurate Hyperdimensional Learning.\t(USC)\n- Optimizing Stochastic Computing for Low Latency Inference of Convolutional Neural Networks.\t(Nanjing University)\n- HAPI: Hardware-Aware Progressive Inference.\t(Samsung)\n- MobiLattice: A Depth-wise DCNN Accelerator with Hybrid Digital\u002FAnalog Nonvolatile Processing-In-Memory Block.\t(PKU, Duke)\n- A Many-Core Accelerator Design for On-Chip Deep Reinforcement Learning.\t(ICT)\n- DRAMA: An Approximate DRAM Architecture for High-performance and Energy-efficient Deep Training System.\t(Kyung Hee Univ., NUS)\n- FPGA-based Low-Batch Training Accelerator for Modern CNNs Featuring High Bandwidth Memory.\t(ASU, Intel)\n\n### 2021 ISSCC\n- **The A100 Datacenter GPU and Ampere Architecture.** (NVIDIA)\n- **Kunlun: A 14nm High-Performance AI Processor for Diversified Workloads.** (Baidu)\n- **A 12nm Autonomous-Driving Processor with 60.4TOPS, 13.8TOPS\u002FW CNN Executed by Task-Separated ASIL D Control.** (Renesas)\n- **BioAIP: A Reconfigurable Biomedical AI Processor with Adaptive Learning for Versatile Intelligent Health Monitoring.** (UESTC)\n- **A 0.2-to-3.6TOPS\u002FW Programmable Convolutional Imager SoC with In-Sensor Current-Domain Ternary-Weighted MAC Operations for Feature Extraction and Region-of-Interest Detection.** (Leuven)\n- **A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training, 102.4TOPS INT4 Inference and Workload-Aware Throttling.** (IBM)\n- **A 28nm 12.1TOPS\u002FW Dual-Mode CNN Processor Using Effective-Weight-Based Convolution and Error-Compensation-Based Prediction.** (THU)\n- **A 40nm 4.81TFLOPS\u002FW 8b Floating-Point Training Processor for Non-Sparse Neural Networks Using Shared Exponent Bias and 24-Way Fused Multiply-Add Tree.** (SNU)\n- **PIU: A 248GOPS\u002FW Stream-Based Processor for Irregular Probabilistic Inference Networks Using Precision-Scalable Posit Arithmetic in 28nm.** (Leuven)\n- **A 6K-MAC Feature-Map-Sparsity-Aware Neural Processing Unit in 5nm Flagship Mobile SoC.** (Samsung)\n- **A 1\u002F2.3inch 12.3Mpixel with On-Chip 4.97TOPS\u002FW CNN Processor Back-Illuminated Stacked CMOS Image Sensor.** (Sony)\n- **A 184μW Real-Time Hand-Gesture Recognition System with Hybrid Tiny Classifiers for Smart Wearable Devices.** (Nanyang)\n- **A 25mm2 SoC for IoT Devices with 18ms Noise-Robust Speech-to-Text Latency via Bayesian Speech Denoising and Attention-Based Sequence-to-Sequence DNN Speech Recognition in 16nm FinFET.** (Harvard, Tufts, ARM, Cornell)\n- **A Background-Noise and Process-Variation-Tolerant 109nW Acoustic Feature Extractor Based on Spike-Domain Divisive-Energy Normalization for an Always-On Keyword Spotting Device.** 
(Columbia)\n- A 148nW General-Purpose Event-Driven Intelligent Wake-Up Chip for AIoT Devices Using Asynchronous Spike-Based Feature Extractor and Convolutional Neural Network. (PKU)\n- A Programmable Neural-Network Inference Accelerator Based on Scalable In-Memory Computing. (Princeton)\n- A 2.75-to-75.9TOPS\u002FW Computing-in-Memory NN Processor Supporting Set-Associate Block-Wise Zero Skipping and Ping-Pong CIM with Simultaneous Computation and Weight Updating. (THU)\n- A 65nm 3T Dynamic Analog RAM-Based Computing-in-Memory Macro and CNN Accelerator with Retention Enhancement, Adaptive Analog Sparsity and 44TOPS\u002FW System Energy Efficiency. (Northwestern)\n- A 5.99-to-691.1TOPS\u002FW Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity-Based Optimization and Variable-Precision Quantization. (THU, UESTC, NTHU)\n- A 22nm 4Mb 8b-Precision ReRAM Computing-in-Memory Macro with 11.91 to 195.7TOPS\u002FW for Tiny AI Edge Devices. (NTHU, TSMC)\n- eDRAM-CIM: Compute-In-Memory Design with Reconfigurable Embedded-Dynamic-Memory Array Realizing Adaptive Data Converters and Charge-Domain Computing. (UT Austin, Intel)\n- A 28nm 384kb 6T-SRAM Computation-in-Memory Macro with 8b of Precision for AI Edge Chips. (NTHU, Industrial Technology Research Institute, TSMC)\n- An 89TOPS\u002FW and 16.3TOPS\u002Fmm2 All-Digital SRAM-Based Full-Precision Compute-In-Memory Macro in 22nm for Machine-Learning Edge Applications. (TSMC)\n- A 20nm 6GB Function-In-Memory DRAM, Based on HBM2 with a 1.2TFLOPS Programmable Computing Unit Using Bank-Level Parallelism, for Machine Learning Applications. (Samsung)\n- A 21×21 Dynamic-Precision Bit-Serial Computing Graph Accelerator for Solving Partial Differential Equations Using Finite Difference Method. (Nanyang)\n\n### 2021 ASPLOS\n- **Exploiting Gustavson's Algorithm to Accelerate Sparse Matrix Multiplication.**\t(MIT, NVIDIA)\n- **SIMDRAM: A Framework for Bit-Serial SIMD Processing using DRAM.**\t(ETHZ, CMU)\n- **RecSSD: Near Data Processing for Solid State Drive Based Recommendation Inference.**\t(Harvard, Facebook, ASU)\n- DiAG: A Dataflow-inspired Architecture for General-purpose Processors.\t(UIUC)\n- Field-Configurable Multi-resolution Inference: Rethinking Quantization.\t(Harvard, Franklin & Marshall College)\n- Defensive Approximation: Securing CNNs using Approximate Computing.\t(University of Sfax et al.)\n\n### 2021 HPCA\n- **A Computational Stack for Cross-Domain Acceleration.**\t(UCSD et al.)\n- **Heterogeneous Dataflow Accelerators for Multi-DNN Workloads.**\t(GaTech, Facebook, NVIDIA)\n- **SPAGHETTI: Streaming Accelerators for Highly Sparse GEMM on FPGAs.**\t(SFU et al.)\n- **SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning.**\t(MIT)\n- **Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework.**\t(Northeastern et al.)\n- **Tensor Casting: Co-Designing Algorithm-Architecture for Personalized Recommendation Training.**\t(KAIST)\n- **GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent.**\t(SNU, Yonsei)\n- **SpaceA: Sparse Matrix Vector Multiplication on Processing-in-Memory Accelerator.**\t(UCSB, PKU)\n- **Layerweaver: Maximizing Resource Utilization of Neural Processing Units via Layer-Wise Scheduling.**\t(Sungkyunkwan, SNU)\n- **Efficient Tensor Migration and Allocation on Heterogeneous Memory Systems for Deep Learning.**\t(UCM, Microsoft)\n- **CSCNN: Algorithm-hardware Co-design for CNN Accelerators using Centrosymmetric Filters.**\t(GWU, Ohio)\n- **Adapt-NoC: 
A Flexible Network-on-Chip Design for Heterogeneous Manycore Architectures.**\t(GWU)\n- **GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks.**\t(GWU, Ohio)\n- **Ascend: a Scalable and Unified Architecture for Ubiquitous Deep Neural Network Computing.**\t(Huawei)\n- **Understanding Training Efficiency of Deep Learning Recommendation Models at Scale.**\t(Facebook)\n- **Eudoxus: Characterizing and Accelerating Localization in Autonomous Machines.**\t(Rochester et al.)\n- **NeuroMeter: An Integrated Power, Area, and Timing Modeling Framework for Machine Learning Accelerators.**\t(UCSB, Google)\n- **Chasing Carbon: The Elusive Environmental Footprint of Computing.**\t(Harvard, Facebook)\n- **FuseKNA: Fused Kernel Convolution based Accelerator for Deep Neural Networks.**\t(THU)\n- **FAFNIR: Accelerating Sparse Gathering by Using Efficient Near-Memory Intelligent Reduction.**\t(GaTech)\n- **VIA: A Smart Scratchpad for Vector Units With Application to Sparse Matrix Computations.**\t(Barcelona Supercomputing Center et al.)\n- Cheetah: Optimizing and Accelerating Homomorphic Encryption for Private Inference.\t(NYU, SNU, Harvard, Facebook)\n- CAPE: A Content-Addressable Processing Engine.\t(Cornell, PSU)\n- Prodigy: Improving the Memory Latency of Data-Indirect Irregular Workloads Using Hardware-Software Co-Design.\t(Umich et al.)\n- BRIM: Bistable Resistively-Coupled Ising Machine.\t(Rochester)\n- An Analog Preconditioner for Solving Linear Systems.\t(Sandia et al.)\n\n### 2021 ISCA\n- Ten Lessons From Three Generations Shaped Google's TPUv4i (Google)\n- Sparsity-Aware and Re-Configurable NPU Architecture for Samsung Flagship Mobile SoC (Samsung)\n- Energy Efficiency Boost in the AI-Infused POWER10 Processor (IBM)\n- Hardware Architecture and Software Stack for PIM Based on Commercial DRAM Technology (Samsung)\n- Pioneering Chiplet Technology and Design for the AMD EPYC™ and Ryzen™ Processor Families (AMD)\n- RaPiD: AI Accelerator for Ultra-Low Precision Training and Inference (IBM)\n- REDUCT: Keep It Close, Keep It Cool! 
- Scaling DNN Inference on Multi-Core CPUs with Near-Cache Compute (Intel)\n- Communication Algorithm-Architecture Co-Design for Distributed Deep Learning (UCSB, TAMU)\n- ABC-DIMM: Alleviating the Bottleneck of Communication in DIMM-Based Near-Memory Processing with Inter-DIMM Broadcast (THU)\n- Sieve: Scalable In-Situ DRAM-Based Accelerator Designs for Massively Parallel k-mer Matching (Virginia)\n- FORMS: Fine-Grained Polarized ReRAM-Based In-Situ Computation for Mixed-Signal DNN Accelerator (Northeastern et al)\n- BOSS: Bandwidth-Optimized Search Accelerator for Storage-Class Memory (SNU)\n- Accelerated Seeding for Genome Sequence Alignment with Enumerated Radix Trees (Umich)\n- Aurochs: An Architecture for Dataflow Threads (Stanford)\n- PipeZK: Accelerating Zero-Knowledge Proof with a Pipelined Architecture (PKU et al)\n- CODIC: A Low-Cost Substrate for Enabling Custom In-DRAM Functionalities and Optimizations (ETHZ)\n- Enabling Compute-Communication Overlap in Distributed Deep Learning Training Platforms (GaTech)\n- CoSA: Scheduling by Constrained Optimization for Spatial Accelerators (Berkeley)\n- η-LSTM: Co-Designing Highly-Efficient Large LSTM Training via Exploiting Memory-Saving and Architectural Design Opportunities (Washington et al)\n- FlexMiner: A Pattern-Aware Accelerator for Graph Pattern Mining (MIT)\n- PolyGraph: Exposing the Value of Flexibility for Graph Processing Accelerators (UCLA)\n- Large-Scale Graph Processing on FPGAs with Caches for Thousands of Simultaneous Misses (EPFL)\n- SPACE: Locality-Aware Processing in Heterogeneous Memory for Personalized Recommendations (Yonsei)\n- ELSA: Hardware-Software Co-Design for Efficient, Lightweight Self-Attention Mechanism in Neural Networks (SNU)\n- Cambricon-Q: A Hybrid Architecture for Efficient Training (CAS)\n- TENET: A Framework for Modeling Tensor Dataflow Based on Relation-Centric Notation (PKU et al)\n- NASGuard: A Novel Accelerator Architecture for Robust Neural Architecture Search (NAS) Networks (CAS)\n- NASA: Accelerating Neural Network Design with a NAS Processor (CAS)\n- Albireo: Energy-Efficient Acceleration of Convolutional Neural Networks via Silicon Photonics (Ohio et al)\n- QUAC-TRNG: High-Throughput True Random Number Generation Using Quadruple Row Activation in Commodity DRAM Chips (ETHZ)\n- NN-Baton: DNN Workload Orchestration and Chiplet Granularity Exploration for Multichip Accelerators (THU)\n- SNAFU: An Ultra-Low-Power, Energy-Minimal CGRA-Generation Framework and Architecture (CMU)\n- SARA: Scaling a Reconfigurable Dataflow Accelerator (Stanford)\n- HASCO: Towards Agile HArdware and Software CO-design for Tensor Computation (PKU et al)\n- SpZip: Architectural Support for Effective Data Compression In Irregular Applications (MIT)\n- Dual-Side Sparse Tensor Core (Microsoft)\n- RingCNN: Exploiting Algebraically-Sparse Ring Tensors for Energy-Efficient CNN-Based Computational Imaging (NTHU)\n- GoSPA: An Energy-Efficient High-Performance Globally Optimized SParse Convolutional Neural Network Accelerator (Rutgers)\n\n### 2021 VLSI\n- MN-Core - A Highly Efficient and Scalable Approach to Deep Learning (Preferred Networks)\n- CHIMERA: A 0.92 TOPS, 2.2 TOPS\u002FW Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference\t(Stanford, TSMC)\n- OmniDRL: A 29.3 TFLOPS\u002FW Deep Reinforcement Learning Processor with Dual-Mode Weight Compression and On-Chip Sparse Weight Transposer\t(KAIST)\n- DepFiN: A 12nm, 3.8TOPs Depth-First CNN Processor for High Res. 
Image Processing\t(Leuven)\n- PNNPU: A 11.9 TOPS\u002FW High-Speed 3D Point Cloud-Based Neural Network Processor with Block-Based Point Processing for Regular DRAM Access\t(KAIST)\n- A 28nm 276.55TFLOPS\u002FW Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation\t(THU)\n- A 13.7 TFLOPS\u002FW Floating-point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory\t(KAIST)\n- PIMCA: A 3.4-Mb Programmable In-Memory Computing Accelerator in 28nm for On-Chip DNN Inference\t(ASU)\n- A 6.54-to-26.03 TOPS\u002FW Computing-In-Memory RNN Processor Using Input Similarity Optimization and Attention-Based Context-Breaking with Output Speculation\t(THU, NTHU)\n- Fully Row\u002FColumn-Parallel In-Memory Computing SRAM Macro Employing Capacitor-Based Mixed-Signal Computation with 5-b Inputs\t(Princeton)\n- HERMES Core – A 14nm CMOS and PCM-Based In-Memory Compute Core Using an Array of 300ps\u002FLSB Linearized CCO-Based ADCs and Local Digital Processing\t(IBM)\n- A 20x28 Spins Hybrid In-Memory Annealing Computer Featuring Voltage-Mode Analog Spin Operator for Solving Combinatorial Optimization Problems\t(NTU, UCSB)\n- Analog In-Memory Computing in FeFET-Based 1T1R Array for Edge AI Applications\t(Sony)\n- Energy-Efficient Reliable HZO FeFET Computation-in-Memory with Local Multiply & Global Accumulate Array for Source-Follower & Charge-Sharing Voltage Sensing\t(Tokyo)\n\n### 2021 ICCAD\n- Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator\t(SJTU)\n- Crossbar based Processing in Memory Accelerator Architecture for Graph Convolutional Networks\t(PSU, IBM)\n- REREC: In-ReRAM Acceleration with Access-Aware Mapping for Personalized Recommendation\t(Duke, THU)\n- A Framework for Area-efficient Multi-task BERT Execution on ReRAM-based Accelerators\t(KAIST)\n- A Convergence Monitoring Method for DNN Training of On-Device Task Adaptation\t(KAIST)\n\n### 2021 HotChips\n- Accelerating ML Recommendation with over a Thousand RISC-V\u002FTensor Processors on Esperanto’s ET-SoC-1 Chip (Esperanto Technologies)\n- AI Compute Chip from Enflame\t(Enflame Technology)\n- Qualcomm Cloud AI 100: 12 TOPs\u002FW Scalable, High Performance and Low Latency Deep Learning Inference Accelerator\t(Qualcomm)\n- Graphcore Colossus Mk2 IPU\t(Graphcore)\n- The Multi-Million Core, Multi-Wafer AI Cluster\t(Cerebras)\n- SambaNova SN10 RDU: Accelerating Software 2.0 with Dataflow\t(SambaNova)\n\n### 2021 MICRO\n- RACER: Bit-Pipelined Processing Using Resistive Memory\t(CMU, UIUC)\n- AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning\t(Soongsil, ASU)\n- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware\t(USC)\n- 2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency\t(Rice)\n- F1: A Fast and Programmable Accelerator for Fully Homomorphic Encryption\t(MIT, Umich)\n- Equinox: Training (for Free) on a Custom Inference Accelerator\t(EPFL)\n- PointAcc: Efficient Point Cloud Accelerator\t(MIT)\n- Noema: Hardware-Efficient Template Matching for Neural Population Pattern Detection\t(Toronto, NeuroTek)\n- SquiggleFilter: An Accelerator for Portable Virus Detection\t(Umich)\n- EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference\t(Harvard et al.)\n- HiMA: A Fast and Scalable History-Based Memory Access Engine for Differentiable Neural 
Computer\t(Umich)\n- FPRaker: A Processing Element for Accelerating Neural Network Training\t(Toronto)\n- RecPipe: Co-Designing Models and Hardware to Jointly Optimize Recommendation Quality and Performance\t(Harvard, Facebook)\n- Shift-BNN: Highly-Efficient Probabilistic Bayesian Neural Network Training via Memory-Friendly Pattern Retrieving\t(Houston et al.)\n- Distilling Bit-Level Sparsity Parallelism for General Purpose Deep Learning Acceleration\t(ICT, UESTC)\n- Sanger: A Co-Design Framework for Enabling Sparse Attention using Reconfigurable Architecture\t(PKU)\n- ESCALATE: Boosting the Efficiency of Sparse CNN Accelerator with Kernel Decomposition\t(Duke, USC)\n- SparseAdapt: Runtime Control for Sparse Linear Algebra on a Reconfigurable Accelerator\t(Umich et al.)\n- Capstan: A Vector RDA for Sparsity\t(Stanford, SambaNova)\n- I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement Through Islandization\t(PNNL et al.)\n\n### 2021 DAC\n- MAT: Processing In-Memory Acceleration for Long-Sequence Attention\n- PIM-Quantifier: A Processing-in-Memory Platform for mRNA Quantification\n- Network-on-Interposer Design for Agile Neural-Network Processor Chip Customization\n- GCiM: A Near-Data Processing Accelerator for Graph Construction\n- An Intelligent Video Processing Architecture for Edge-cloud Video Streaming\n- Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration\n- PixelSieve: Towards Efficient Activity Analysis From Compressed Video Streams\n- TensorLib: A Spatial Accelerator Generation Framework for Tensor Algebra\n- Scaling Deep-Learning Inference with Chiplet-based Architecture and Photonic Interconnects\n- Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices\n- Designing a 2048-Chiplet, 14336-Core Waferscale Processor\n- Accelerating Fully Homomorphic Encryption with Processing in Memory\n\n### 2022 ISSCC\n- A 512Gb In-Memory-Computing 3D NAND Flash Supporting Similar Vector Matching Operations on AI Edge Devices\n- A 1ynm 1.25V 8Gb, 16Gb\u002Fs\u002Fpin GDDR6-Based Accelerator-In-Memory Supporting 1TFLOPS MAC Operation and Various Activation Functions for Deep Learning Applications\n- A 22nm 4Mb STT-MRAM data-encrypted Near-Memory-Computation Macro with 192GB\u002Fs Read-and-Decryption Bandwidth and 25.1-55.1 TOPS\u002FW at 8b MAC for AI-oriented Operations\n- A 40nm 2M-cell 8b-Precision Hybrid SLC-MLC PCM Computing-in-Memory Macro with 20.5-65.0 TOPS\u002FW for Tiny AI Edge Devices\n- An 8Mb DC-Current-Free Binary-to-8b Precision ReRAM Nonvolatile Computing-in-Memory Macro using Time-Space-Readout with 1286.4 TOPS\u002FW - 21.6 TOPS\u002FW for AI Edge Devices\n- Single-Mode 6T CMOS SRAM Macros with Keeper-Loading-Free Peripherals and Row-Separate Dynamic Body Bias Achieving 2.53fW\u002Fbit Leakage for AIoT Sensing Platforms\n- A 5 nm 254 TOPS\u002FW and 221 TOPS\u002Fmm2 Fully Digital Computing-in-Memory Supporting Wide Range Dynamic-Voltage-Frequency Scaling and Simultaneous MAC and Write Operations\n- A 1.041Mb\u002Fmm2 27.38TOPS\u002FW Signed-INT8 Dynamic Logic Based ADC-Less SRAM Compute-In-Memory Macro in 28nm with Reconfigurable Bitwise Operation for AI and Embedded Applications\n- A 28nm 1Mb Time-Domain 6T SRAM Computing-in-Memory Macro with 6.6ns Latency 1241 GOPS and 37.01 TOPS\u002FW for 8b-MAC Operations for AI Edge Devices\n- A Multi-Mode 8K-MAC HW-Utilization-Aware Neural Processing Unit with a Unified Multi-Precision Datapath in 4nm Flagship Mobile SoC\n- A 65nm 
Systolic Neural CPU Processor for Combined Deep Learning and General-Purpose Computing with 95% PE Utilization, High Data Locality and Enhanced End-to-End Performance\n- COMB-MCM: Computing-on-Memory-Boundary NN Processor with Bipolar Bitwise Sparsity Optimization for Scalable Multi-Chiplet-Module Edge Machine Learning\n- Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS\u002FW for CIFAR-100 and ImageNet\n- A 28nm 29.2TFLOPS\u002FW BF16 and 36.5TOPS\u002FW INT8 Reconfigurable Digital CIM Processor with Unified FP\u002FINT Pipeline and Bitwise In-Memory Booth Multiplication for Cloud Deep Learning Acceleration\n- DIANA: An End-to-End Energy-Efficient DIgital and ANAlog Hybrid Neural Network SoC\n- ARCHON: A 332.7TOPS\u002FW 5b Variation-Tolerant Analog CNN Processor Featuring Analog Neuronal Computation Unit and Analog Memory\n- Analog Matrix Processor for Edge AI Real-Time Video Analytics\n- A 0.8V Intelligent Vision Sensor with Tiny Convolutional Neural Network and Programmable Weights Using Mixed-Mode Processing-in-Sensor Technique for Image Classification\n- 184QPS\u002FW 64Mb\u002Fmm2 3D Logic-to-DRAM Hybrid Bonding with Process-Near-Memory Engine for Recommendation System\n- A 28nm 27.5TOPS\u002FW Approximate-Computing-Based Transformer Processor with Asymptotic Sparsity Speculating and Out-of-Order Computing\n- A 28nm 15.59μJ\u002FToken Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline\u002FParallel Reconfigurable Modes\n- ReckOn: A 28nm Sub-mm2 Task-Agnostic Spiking Recurrent Neural Network Processor Enabling On-Chip Learning over Second-Long Timescales\n\n### 2022 HPCA\n- LISA: Graph Neural Network based Portable Mapping on Spatial Accelerators \n- Upward Packet Popup for Deadlock Freedom in Modular Chiplet-Based Systems\n- FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding\n- TransPIM: A Memory-based Acceleration via Software-Hardware Co-Design for Transformer\n- An Optimization Framework for Mapping Multiple DNNs on Multiple Accelerator Cores\n- ScalaGraph: A Scalable Accelerator for Massively Parallel Graph Processing\n- PIMCloud: QoS-Aware Resource Management of Latency-Critical Applications in Clouds with Processing-in-Memory\n- ANNA: Specialized Architecture for Approximate Nearest Neighbor Search\n- Enabling Efficient Large-Scale Deep Learning Training with Cache Coherent Disaggregated Memory Systems\n- NeuroSync: A Scalable and Accurate Brain Simulation System using Safe and Efficient Speculation\n- Enabling High-Quality Uncertainty Quantification in a PIM Designed for Bayesian Neural Network\n- Griffin: Rethinking Sparse Optimization for Deep Learning Architectures\n- CANDLES: Channel-Aware Novel Dataflow-Microarchitecture Co-Design for Low Energy Sparse Neural Network Acceleration\n- SPACX: Silicon Photonics-based Scalable Chiplet Accelerator for DNN Inference\n- RM-SSD: In-Storage Computing for Large-Scale Recommendation Inference\n- CAMA: Energy and Memory Efficient Automata Processing in Content-Addressable Memories\n- TNPU: Supporting Trusted Execution with Tree-less Integrity Protection for Neural Processing Unit\n- S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration\n- Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures\n- Atomic Dataflow based Graph-Level Workload Orchestration for Scalable DNN Accelerators\n- SecNDP: Secure Near-Data Processing 
with Untrusted Memory\n- Direct Spatial Implementation of Sparse Matrix Multipliers for Reservoir Computing\n- Hercules: Heterogeneity-aware Inference Serving for At-scale Personalized Recommendation\n- ReGNN: A Redundancy-Eliminated Graph Neural Networks Accelerator\n- Parallel Time Batching: Systolic-Array Acceleration of Sparse Spiking Neural Computation\n- GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design\n- CoopMC: Algorithm-Architecture Co-Optimization for Markov Chain Monte Carlo Accelerators\n- Application Defined On-chip Networks for Heterogeneous Chiplets: An Implementation Perspective\n- The Specialized High-Performance Network on Anton 3\n- DarkGates: A Hybrid Power-gating Architecture to Mitigate Dark Sides of Dark-Silicon in High Performance Processors\n\n### 2022 ASPLOS\n- DOTA: Detect and Omit Weak Attentions for Scalable Transformer Acceleration \n- A Full-stack Search Technique for Domain Optimized Deep Learning Accelerators\n- FINGERS: Exploiting Fine-Grained Parallelism in Graph Mining Accelerators\n- BiSon-e: A Lightweight and High-Performance Accelerator for Narrow Integer Linear Algebra Computing on the Edge\n- RecShard: Statistical Feature-Based Memory Optimization for Industry-Scale Neural Recommendation\n- AStitch: Enabling A New Multi-Dimensional Optimization Space for Memory-Intensive ML Training and Inference on Modern SIMT Architectures\n- NASPipe: High Performance and Reproducible Pipeline Parallel Supernet Training via Causal Synchronous Parallelism\n- VELTAIR: Towards High-Performance Multi-Tenant Deep Learning Services via Adaptive Compilation and Scheduling\n- Breaking the Computation and Communication Abstraction Barrier in Distributed Machine Learning Workloads\n- GenStore: An In-storage Processing System for Genome Sequence Analysis\n- ProSE: The Architecture and Design of a Protein Discovery Engine\n- REVAMP: A Systematic Framework for Heterogeneous CGRA Realization\n- Invisible Bits: Hiding Secret Messages in SRAM’s Analog Domain\n\n### 2022 ISCA\n- TDGraph: A Topology-Driven Accelerator for High-Performance Streaming Graph Processing\n- DIMMining: Pruning-Efficient and Parallel Graph Mining on DIMM-based Near-Memory-Computing\n- NDMiner: Accelerating Graph Pattern Mining Using Near Data Processing \n- SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures \n- Hyperscale FPGA-As-A-Service Architecture for Large-Scale Distributed Graph Neural Network\n- Crescent: Taming Memory Irregularities for Accelerating Deep Point Cloud Analytics \n- The Mozart Reuse Exposed Dataflow Processor for AI and Beyond \n- Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models\n- AI Accelerator on IBM Telum Processor\n- Understanding Data Storage and Ingestion for Large-Scale Deep Recommendation Model\n- Cascading Structured Pruning: Enabling High Data Reuse for Sparse DNN Accelerators \n- Anticipating and Eliminating Redundant Computations in Accelerated Sparse Training \n- SIMD^2: A Generalized Matrix Instruction Set for Accelerating Tensor Computation beyond GEMM\n- A Software-defined Tensor Streaming Multiprocessor for Large-Scale Machine Learning \n- A Network Bandwidth-Aware Collective Scheduling Policy for Distributed Training of DL Models \n- Increasing Ising Machine Capacity with Multi-Chip Architectures \n- Training Personalized Recommendation Systems from (GPU) Scratch: Look Forward not Backwards \n- AMOS: Enabling Automatic 
Mapping for Tensor Computations On Spatial Accelerators with Hardware Abstraction\n- Mokey: Enabling Narrow Fixed-Point Inference for Out-of-the-Box Floating-Point Transformer Models \n- Accelerating Attention through Gradient-Based Learned Runtime Pruning \n\n### 2022 HotChips\n- Groq Software-Defined Scale-out Tensor Streaming Multi-Processor\n- Boqueria: A 2 PetaFLOPs, 30 TeraFLOPs\u002FW At-Memory Inference Acceleration Device with 1,456 RISC-V Cores\n- DOJO: The Microarchitecture of Tesla's Exa-Scale Computer\n- DOJO - Super-Compute System Scaling for ML Training\n- Cerebras Architecture Deep Dive: First Look Inside the HW\u002FSW Co-Design for Deep Learning\n\n### 2022 MICRO\n- Cambricon-P: A Bitflow Architecture for Arbitrary Precision Computing\n- OverGen: Improving FPGA Usability Through Domain-specific Overlay Generation\n- big.VLITTLE: On-Demand Data-Parallel Acceleration for Mobile Systems on Chip \n- Pushing Point Cloud Compression to Edge \n- ROG: A High Performance and Robust Distributed Training System for Robotic IoT \n- Automatic Domain-Specific SoC Design for Autonomous Unmanned Aerial Vehicles \n- GCD2: A Globally Optimizing Compiler for Mapping DNNs to Mobile DSPs\n- Skipper: Enabling Efficient SNN Training Through Activation-Checkpointing and Time-Skipping \n- Going Further With Winograd Convolutions: Tap-Wise Quantization for Efficient Inference on 4x4 Tiles\n- HARMONY: Heterogeneity-Aware Hierarchical Management for Federated Learning System \n- Adaptable Butterfly Accelerator for Attention-Based NNs via Hardware and Algorithm Co-Design \n- DFX: A Low-Latency Multi-FPGA Appliance for Accelerating Transformer-Based Text Generation\n- GenPIP: In-Memory Acceleration of Genome Analysis by Tight Integration of Basecalling and Read Mapping \n- BEACON: Scalable Near-Data-Processing Accelerators for Genome Analysis near Memory Pool with the CXL Support\n- ICE: An Intelligent Cognition Engine with 3D NAND-based In-Memory Computing for Vector Similarity Search Acceleration \n- Sparse Attention Acceleration with Synergistic In-Memory Pruning and On-Chip Recomputation \n- FracDRAM: Fractional Values in Off-the-Shelf DRAM\n- pLUTo: Enabling Massively Parallel Computation in DRAM via Lookup Tables \n- Multi-Layer In-Memory Processing \n- Flash-Cosmos: In-Flash Bitwise Operations Using Inherent Computation Capability of NAND Flash Memory \n- Scaling Superconducting Quantum Computers with Chiplet Architectures \n- Towards Developing High Performance RISC-V Processors Using Agile Methodology \n- A Data-Centric Accelerator for High-Performance Hypergraph Processing \n- DPU-v2: Energy-Efficient Execution of Irregular Directed Acyclic Graphs\n- 3D-FPIM: An Extreme Energy-Efficient DNN Acceleration System Using 3D NAND Flash-Based In-Situ PIM Unit\n- DeepBurning-SEG: Generating DNN Accelerators of Segment-Grained Pipeline Architecture\n- ANT: Exploiting Adaptive Numerical Data Type for Low-Bit Deep Neural Network Quantization \n- Sparseloop: An Analytical Approach to Sparse Tensor Accelerator Modeling \n- Ristretto: An Atomized Processing Architecture for Sparsity-Condensed Stream Flow in CNN\n\n### 2023 ISSCC\n- MetaVRain: A 133mW Real-Time Hyper-Realistic 3D-NeRF Processor with 1D-2D Hybrid-Neural Engines for Metaverse on Mobile Devices\n- A 22-nm 832-kb Hybrid-Domain Floating-Point SRAM In-Memory-Compute Macro with 16.2-70.2TFLOPS\u002FW for High-Accuracy AI-Edge Devices\n- A 28-nm 64-kb 31.6-TFLOPS\u002FW Digital-domain Floating-Point-Computing-Unit and Double-bit 6T-SRAM 
Computing-in-Memory Macro for Floating-Point CNNs\n- A 28-nm 38-to-102-TOPS\u002FW 8-b Multiply-Less Approximate Digital SRAM Compute-In-Memory Macro for Neural-Network Inference\n- A 4-nm 6163-TOPS\u002FW\u002Fb 4790-TOPS\u002Fmm2\u002Fb SRAM based Digital-Computing-in-Memory Macro Supporting Bit-Width Flexibility and Simultaneous MAC and Weight Update\n- A 28-nm Horizontal-weight-shift and Vertical-feature-shift based Separate-wordline 6T-SRAM Computation-in-Memory Unit-Macro for Edge Depthwise Neural-Networks\n- A 70.85-86.27-TOPS\u002FW PVT-Insensitive 8-b Word-Wise ACIM with Post Processing Relaxation\n- CV-CIM: A 28-nm XOR-derived Similarity-aware Computation-In-Memory For Cost Volume Construction\n- A 22-nm Delta-Sigma Computing-In-Memory (ΔΣCIM) SRAM Macro with Near-Zero-Mean Outputs and LSB-First ADCs Achieving 21.38TOPS\u002FW for 8b-MAC Edge AI Processing\n- CTLE-Ising: A 1440-Spin Continuous-Time Latch-based Ising Machine with One-Shot Fully-Parallel Spin Updates Featuring Equalization of Spin States\n- A 7nm ML Training Processor with Wave Clock Distribution\n- A 1mW Always-on Computer Vision Deep Learning Neural Decision Processor\n- MulTCIM: A 28nm 2.24μJ\u002FToken Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers\n- A 28nm 53.8TOPS\u002FW 8b Sparse Transformer Accelerator with In-Memory Butterfly Zero Skipper for Unstructured-Pruned NN and CIM-Based Local-Attention-Reusable Engine\n- A 28nm 16.9-300TOPS\u002FW Computing-in-Memory Processor Supporting Floating-Point NN Inference\u002FTraining with Intensive-CIM Sparse-Digital Architecture\n- TensorCIM: A 28nm 3.7nJ\u002FGather and 8.3TFLOPS\u002FW FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration\n- DynaPlasia: An eDRAM In-Memory-Computing-Based Reconfigurable Spatial Accelerator with Triple-Mode Cell for Dynamic Resource Switching\n- A Nonvolatile AI-Edge Processor with 4MB SLC-MLC Hybrid-Mode ReRAM Compute-in-Memory Macro and 51.4-251TOPS\u002FW\n- A 40-310TOPS\u002FW SRAM-Based All-Digital Up to 4b In-Memory Computing Multi-Tiled NN Accelerator in FD-SOI 18nm for Deep-Learning Edge Applications\n- A 12.4TOPS\u002FW @ 136GOPS AI-IoT System-on-Chip with 16 RISC-V, 2-to-8b Precision-Scalable DNN Acceleration and 30%-Boost Adaptive Body Biasing\n- A 28nm 2D\u002F3D Unified Sparse Convolution Accelerator with Block-Wise Neighbor Searcher for Large-Scaled Voxel-Based Point Cloud Network\n- A 127.8TOPS\u002FW Arbitrarily Quantized 1-to-8b Scalable-Precision Accelerator for General-Purpose Deep Learning with Reduction of Storage, Logic and Latency Waste\n- A 28nm 11.2TOPS\u002FW Hardware-Utilization-Aware Neural-Network Accelerator with Dynamic Dataflow\n- C-DNN: A 24.5-85.8TOPS\u002FW Complementary-Deep-Neural-Network Processor with Heterogeneous CNN\u002FSNN Core Architecture and Forward-Gradient-Based Sparsity Generation\n- ANP-I: A 28nm 1.5pJ\u002FSOP Asynchronous Spiking Neural Network Processor Enabling Sub-0.1μJ\u002FSample On-Chip Learning for Edge-AI Applications\n- DL-VOPU: An Energy-Efficient Domain-Specific Deep-Learning-based Visual Object Processing Unit Supporting Multi-Scale Semantic Feature Extraction for Mobile Object Detection\u002FTracking Applications\n- A 0.81mm2 740μW Real-Time Speech Enhancement Processor Using Multiplier-Less PE Arrays for Hearing Aids in 28nm CMOS\n- A 12nm 18.1TFLOPs\u002FW Sparse Transformer Processor with Entropy-Based Early Exit, Mixed-Precision Predication and Fine-Grained Power Management\n\n### 2023 HPCA\n- SGCN: 
Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators\n- PhotoFourier: A Photonic Joint Transform Correlator-Based Neural Network Accelerator\n- INCA: Input-stationary Dataflow at Outside-the-box Thinking about Deep Learning Accelerators\n- GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for Memory-Efficient Graph Convolutional Neural Networks\n- Logical\u002FPhysical Topology-Aware Collective Communication in Deep Learning Training\n- Sibia: Signed Bit-slice Architecture for Dense DNN Acceleration with Slice-level Sparsity Exploitation\n- Baryon: Efficient Hybrid Memory Management with Compression and Sub-Blocking\n- iCACHE: An Importance-Sampling-Informed Cache for Accelerating I\u002FO-Bound DNN Model Training\n- HIRAC: A Hierarchical Accelerator with Sorting-based Packing for SpGEMMs in DNN Applications\n- VEGETA: Vertically-Integrated Extensions for Sparse\u002FDense GEMM Tile Acceleration on CPUs\n- ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design\n- Leveraging Domain Information for the Efficient Automated Design of Deep Learning Accelerators\n- DIMM-Link: Enabling Efficient Inter-DIMM Communication for Near-Memory Processing\n- Post0-VR: Enabling Universal Realistic Rendering for Modern VR via Exploiting Architectural Similarity and Data Sharing\n- ParallelNN: A Parallel Octree-based Nearest Neighbor Search Accelerator for 3D Point Clouds\n- ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with Linear Taylor Attention\n- CTA: Hardware-Software Co-design for Compressed Token Attention Mechanism\n- HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers\n- GraNDe: Near-Data Processing Architecture With Adaptive Matrix Mapping for Graph Convolutional Networks\n- DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling\n- CEGMA: Coordinated Elastic Graph Matching Acceleration for Graph Matching Networks\n- ISOSceles: Accelerating Sparse CNNs through Inter-Layer Pipelining\n- OptimStore: In-Storage Optimization of Large Scale DNNs with On-Die Processing\n- MERCURY: Accelerating DNN Training By Exploiting Input Similarity\n- Dalorex: A Data-Local Program Execution and Architecture for Memory-bound Applications\n- eNODE: Energy-Efficient and Low-Latency Edge Inference and Training of Neural ODEs\n- MoCA: Memory-Centric, Adaptive Execution for Multi-Tenant Deep Neural Networks\n- Mix-GEMM: An efficient HW-SW Architecture for Mixed-Precision Quantized Deep Neural Networks Inference on Edge Devices\n- FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference\n- Chimera: An Analytical Optimizing Framework for Effective Compute-intensive Operators Fusion\n- Securator: A Fast and Secure Neural Processing Unit\n\n### 2023 ASPLOS\n- Overlap Communication with Dependent Computation via Decomposition in Large Deep Learning Models\n- Heron: Automatically Constrained High-performance Library Generation for Deep Learning Accelerators\n- TelaMalloc: Efficient On-Chip Memory Allocation for Production Machine Learning Accelerators \n- EVStore: Storage and Caching Capabilities for Scaling Embedding Tables in Deep Recommendation Systems \n- WACO: Learning Workload-Aware Co-optimization of the Format and Schedule of a Sparse Tensor Program\n- GRACE: A Scalable Graph-Based Approach To Accelerating Recommendation Model Inference\n- Mapping Very Large Scale Spiking Neuron 
Network to Neuromorphic Hardware \n- HuffDuff: Stealing Pruned DNNs from Sparse Accelerators \n- ABNDP: Co-Optimizing Data Access and Load Balance in Near-Data Processing\n- Infinity Stream: Portable and Programmer-Friendly In-\u002FNear-Memory Fusion\n- Flexagon: A Multi-Dataflow Sparse-Sparse Matrix Multiplication Accelerator for Efficient DNN Processing \n- Accelerating Sparse Data Orchestration via Dynamic Reflexive Tiling \n- SPADA: Accelerating Sparse Matrix Multiplication with Adaptive Dataflow\n- SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning \n- Hidet: Task Mapping Programming Paradigm for Deep Learning Tensor Programs \n- The Sparse Abstract Machine\n- Homunculus: Auto-Generating Efficient Data-Plane ML Pipelines for Datacenter Networks\n- TensorIR: An Abstraction for Automatic Tensorized Program Optimization\n- FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks \n- TLP: A Deep Learning-based Cost Model for Tensor Program Tuning\n- Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning \n- In-Network Aggregation with Transport Transparency for Distributed Training\n- Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression\n- DPACS: Hardware Accelerated Dynamic Neural Network Pruning through Algorithm-Architecture Co-design\n- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs\n- ElasticFlow: An Elastic Serverless Training Platform for Distributed Deep Learning \n- Hyperscale Hardware Optimized Neural Architecture Search \n- MP-Rec: Hardware-Software Co-Design to Enable Multi-Path Recommendation\n\n### 2023 ISCA\n- OliVe: Accelerating Large Language Models via Hardware-friendly Outlier-Victim Pair Quantization\n- FACT: FFN-Attention Co-optimized Transformer Architecture with Eager Correlation Prediction\n- Mystique: Enabling Accurate and Scalable Generation of Production AI Benchmarks\n- Accelerating Personalized Recommendation with Cross-level Near-Memory Processing\n- Understanding and Mitigating Hardware Failures in Deep Learning Training Systems\n- LAORAM: A Look Ahead ORAM Architecture for Training Large Embedding Tables\n- Optimizing CPU Performance for Recommendation Systems At-Scale\n- SPADE: A Flexible and Scalable Accelerator for SpMM and SDDMM\n- MESA: Microarchitecture Extensions for Spatial Architecture Generation\n- FDMAX: An Elastic Accelerator Architecture for Solving Partial Differential Equations\n- RSQP: Problem-specific Architectural Customization for Accelerated Convex Quadratic Optimization\n- ECSSD: Hardware\u002FData Layout Co-Designed In-Storage-Computing Architecture for Extreme Classification\n- SAC: Sharing-Aware Caching in Multi-Chip GPUs\n- SCALO: An Accelerator-Rich Distributed System for Scalable Brain-Computer Interfacing\n- ETTE: Efficient Tensor-Train-based Computing Engine for Deep Neural Networks\n- TaskFusion: An Efficient Transfer Learning Architecture with Dual Delta Sparsity for Multi-Task Natural Language Processing\n- Inter-layer Scheduling Space Definition and Exploration for Tiled Accelerators\n- ArchGym: An Open-Source Gymnasium for Machine Learning Assisted Architecture Design\n- V10: Hardware-Assisted NPU Multi-tenancy for Improved Resource Utilization and Fairness\n- RAELLA: Reforming the Arithmetic for Efficient, Low-Resolution, and Low-Loss Analog PIM: No Retraining Required!\n- MapZero: Mapping for Coarse-grained Reconfigurable Architectures with Reinforcement Learning and 
Monte-Carlo Tree Search\n- TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings\n- A Research Retrospective on AMD’s Exascale Computing Journey\n- MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems\n- With Shared Microexponents, A Little Shifting Goes a Long Way\n\n### 2023 MICRO\n- AuRORA: Virtualized Accelerator Orchestration for Multi-Tenant Workloads\n- UNICO: Unified Hardware Software Co-Optimization for Robust Neural Network Acceleration\n- Spatula: A Hardware Accelerator for Sparse Matrix Factorization\n- Eureka: Efficient Tensor Cores for One-sided Unstructured Sparsity in DNN Inference\n- RM-STC: Row-Merge Dataflow Inspired GPU Sparse Tensor Core for Energy-Efficient Sparse Acceleration\n- Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads\n- MAICC: A Lightweight Many-core Architecture with In-Cache Computing for Multi-DNN Parallel Inference\n- SRIM: A Systolic Random Increment Memory Architecture for Unary Computing\n- Improving Data Reuse in NPU On-chip Memory with Interleaved Gradient Order for DNN Training\n- TT-GNN: Efficient On-Chip Graph Neural Network Training via Embedding Reformation and Hardware Optimization\n- Si-Kintsugi: Recovering Golden-Like Performance of Defective Many-Core Spatial Architectures for AI\n- Bucket Getter: A Bucket-based Processing Engine for Low-bit Block Floating Point (BFP) DNNs\n- ADA-GP: Accelerating DNN Training By ADAptive Gradient Prediction\n- HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity\n- Exploiting Inherent Properties of Complex Numbers for Accelerating Complex Valued Neural Networks\n- Point Cloud Acceleration by Exploiting Geometric Locality\n- HARP: Hardware-Based Pseudo-Tiling for Sparse Matrix Multiplication Accelerator\n- TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators\n- TileFlow: A Framework for Modeling Fusion Dataflow via Tree-based Analysis\n\n### 2023 HotChips\n- Memory-centric Computing with SK Hynix’s Domain-Specific Memory\n- Samsung AI-cluster system with HBM-PIM and CXL-based Processing-near-Memory for transformer-based LLMs\n- A Machine Learning Supercomputer With An Optically Reconfigurable Interconnect and Embeddings Support\n- Inside the Cerebras Wafer-Scale Cluster\n- IBM NorthPole Neural Inference Machine\n- Moffett Antoum: A Deep-Sparse AI Inference System-on-Chip for Vision and Large Language Models\n- Qualcomm® Hexagon™ NPU\n\n### 2024 ISSCC\n- ATOMUS: A 5nm 32TFLOPS\u002F128TOPS ML System-on-Chip for Latency Critical Applications\n- AMD MI300 Modular Chiplet Platform – HPC and AI Accelerator for Exa-Class Systems\n- A 3D Integrated Prototype System-on-Chip for Augmented Reality Applications Using Face-to-Face Wafer-Bonded 7nm Logic at \u003C10μm Pitch with up to 40% Energy Reduction at Iso-Area Footprint\n- Metis AIPU: A 12nm 15TOPS\u002FW 209.6TOPS SoC for Cost- and Energy-Efficient Inference at the Edge\n- IBM NorthPole: An Architecture for Neural Network Inference with a 12nm Chip\n- NVE: A 3nm 23.2TOPS\u002FW 12b-Digital-CIM-Based Neural Engine for High-Resolution Visual-Quality Enhancement on Smart Devices\n- A 28nm 74.34TFLOPS\u002FW BF16 Heterogeneous CIM-Based Accelerator Exploiting Denoising-Similarity for Diffusion Models\n- A 23.9TOPS\u002FW @ 0.8V, 130TOPS AI Accelerator with 16× Performance-Accelerable Pruning in 14nm Heterogeneous Embedded MPU for Real-Time Robot Applications\n- A 28nm Physics Computing Unit Supporting 
Emerging Physics-Informed Neural Network and Finite Element Method for Real-Time Scientific Computing on Edge Devices\n- C-Transformer: A 2.6-to-18.1μJ\u002FToken Homogeneous DNN-Transformer\u002FSpiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models\n- LSPU: A Fully Integrated Real-Time LiDAR-SLAM SoC with Point-Neural-Network Segmentation and Multi-Level kNN Acceleration\n- NeuGPU: A 18.5mJ\u002FIter Neural-Graphics Processing Unit for Instant-Modeling and Real-Time Rendering with Segmented-Hashing Architecture\n- Space-Mate: A 303.5mW Real-Time Sparse Mixture-of-Experts-Based NeRF-SLAM Processor for Mobile Spatial Computing\n- A 28nm 83.23TFLOPS\u002FW POSIT-Based Compute-in-Memory Macro for High-Accuracy AI Application\n- A 16nm 96Kb Integer-Floating-Point Dual-Mode Gain-Cell-Computing-in-Memory Macro with 73.3-163.3TOPS\u002FW and 33.2-91.2TFLOPS\u002FW for AI-Edge Devices\n- A 22nm 64kb Lightning-Like Hybrid Computing-in-Memory Macro with Compressed Adder Tree and Analog-storage Quantizers for Transformer and CNNs\n- A 3nm 32.5 TOPS\u002FW, 55.0 TOPS\u002Fmm2 and 3.78 Mb\u002Fmm2 Fully Digital Computing-in-Memory Supporting INT12 x INT12 with Parallel MAC Architecture and Foundry 6T SRAM Bitcell\n- A 818-4094 TOPS\u002FW Capacitor-Reconfigured CIM Macro for Unified Acceleration of CNNs and Transformers\n- A 28nm 72.12-TFLOPS\u002FW Hybrid-Domain Outer-Product Based Floating-Point SRAM Computing-in-Memory Macro with Logarithm Bit-Width Folding ADC\n- A 28nm 2.4Mb\u002Fmm2 6.9-16.3 TOPS\u002Fmm2 eDRAM-LUT-Based Digital-Computing-in-Memory Macro with In-Memory Encoding and Refreshing\n- A 22nm 16Mb Floating-Point ReRAM Compute-in-Memory Macro with 31.2 TFLOPS\u002FW for AI Edge Devices\n- A Flash-SRAM-ADC Fused Plastic Computing-in-Memory Macro for Learning in Neural Networks in a Standard 14nm FinFET Process\n\n### 2024 HPCA\n- Bandwidth-Effective DRAM Cache for GPUs with Storage-Class Memory\n- Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators\n- STELLAR: Energy-Efficient and Low-Latency SNN Algorithm and Hardware Co-design with Spatiotemporal Computation\n- MIMDRAM: An End-to-End Processing-Using-DRAM System for High-Throughput, Energy-Efficient and Programmer-Transparent Multiple-Instruction Multiple-Data Computing\n- Pathfinding Future PIM Architectures by Demystifying a Commercial PIM Technology\n- Functionally-Complete Boolean Logic in Real DRAM Chips: Experimental Characterization and Analysis\n- StreamPIM: Streaming Matrix Computation in Racetrack Memory\n- SmartDIMM: In-Memory Acceleration of Upper Layer Protocols\n- BeaconGNN: Large-Scale GNN Acceleration with Asynchronous In-Storage Computing\n- Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System\n- FlashGNN: An In-SSD Accelerator for GNN Training\n- DockerSSD: Containerized In-Storage Processing and Hardware Acceleration for Computational SSDs\n- SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving\n- Rapper: A Parameter-Aware Repair-in-Memory Accelerator for Blockchain Storage Platform\n- MOPED: Efficient Motion Planning Engine with Flexible Dimension Support\n- TALCO: Tiling Genome Sequence Alignment using Convergence of Traceback Pointers\n- ECO-CHIP: Estimation of the Carbon Footprint of Chiplet-based Architectures for Sustainable VLSI\n- Lightening-Transformer: A Dynamically-operated Optically-interconnected Photonic Transformer Accelerator\n- SACHI: A 
Stationarity-Aware, All-Digital, Near-Cache, Ising Architecture\n- BitWave: Exploiting Column-Based Bit-Level Sparsity for Deep Learning Acceleration\n- LUTein: Dense-Sparse Bit-slice Architecture with Radix-4 LUT-based Slice-Tensor Processing Units\n- FIGNA: Integer Unit-based Accelerator Design for FP-INT GEMM Preserving Numerical Accuracy\n- ASADI: Accelerating Sparse Attention using Diagonal-based In-situ Computing\n- An LPDDR-based CXL-PNM Platform for TCO-Efficient GPT Inference\n- HotTiles: Accelerating SpMM with Heterogeneous Accelerator Architectures\n- SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding\n- Data Motion Acceleration: Chaining Cross-Domain Multi Accelerators\n- RELIEF: Relieving Memory Pressure In SoCs Via Data Movement-Aware Accelerator Scheduling\n\n### 2024 ASPLOS\n- SpecInfer: Accelerating Large Language Model Serving with Tree-based Speculative Inference and Verification\n- ExeGPT: Constraint-Aware Resource Scheduling for LLM Inference\n- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling\n- SpotServe: Serving Generative Large Language Models on Preemptible Instances\n- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN\n- 8-bit Transformer Inference and Fine-tuning for Edge Accelerators\n- Cocco: Hardware-Mapping Co-Exploration towards Memory Capacity-Communication Optimization\n- Atalanta: A Bit is Worth a “Thousand” Tensor Values\n- Harp: Leveraging Quasi-Sequential Characteristics to Accelerate Sequence-to-Graph Mapping of Long Reads\n- GSCore: Efficient Radiance Field Rendering via Architectural Support for 3D Gaussian Splatting\n- BeeZip: Towards An Organized and Scalable Architecture for Data Compression\n- ACES: Accelerating Sparse Matrix Multiplication with Adaptive Execution Flow and Concurrency-Aware Cache Optimizations\n- Explainable-DSE: An Agile and Explainable Exploration of Efficient HW\u002FSW Codesigns of Deep Learning Accelerators Using Bottleneck Analysis\n- AttAcc! Unleashing the Power of PIM for Batched Transformer-based Generative Model Inference\n- SpecPIM: Accelerating Speculative Inference on PIM-Enabled System via Architecture-Dataflow Co-Exploration\n- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization\n- NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing\n- FEASTA: A Flexible and Efficient Accelerator for Sparse Tensor Algebra in Machine Learning\n- CMC: Video Transformer Acceleration via CODEC Assisted Matrix Condensing\n- Tandem Processor: Grappling with Emerging Operators in Neural Networks\n- Carat: Unlocking Value-Level Parallelism for Multiplier-Free GEMMs\n- ORIANNA: An Accelerator Generation Framework for Optimization-based Robotic Applications\n- SmartMem: Layout Transformation Elimination and Adaptation for Efficient DNN Execution on Mobile\n- Dr. 
DNA: Combating Silent Data Corruptions in Deep Learning using Distribution of Neuron Activations\n- RECom: A Compiler Approach to Accelerate Recommendation Model Inference with Massive Embedding Columns\n- NDPipe: Exploiting Near-data Processing for Scalable Inference and Continuous Training in Photo Storage\n- Fractal: Joint Multi-Level Sparse Pattern Tuning of Accuracy and Performance for DNN Pruning\n- Optimizing Dynamic-Shape Neural Networks on Accelerators via On-the-Fly Micro-Kernel Polymerization\n- DTC-SpMM: Bridging the Gap in Accelerating General Sparse Matrix Multiplication with Tensor Cores\n- SoD2: Statically Optimizing Dynamic Deep Neural Network Execution\n- BVAP: Energy and Memory Efficient Automata Processing for Regular Expressions with Bounded Repetitions\n- IANUS: Integrated Accelerator based on NPU-PIM Unified Memory System\n- PIM-STM: Software Transactional Memory for Processing-In-Memory Systems\n- CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators\n\n### 2024 HotChips\n- NVIDIA Blackwell GPU: Advancing Generative AI and Accelerated Computing\n- SambaNova SN40L RDU: Breaking the Barrier of Trillion+ Parameter Scale Gen AI Computing\n- Tiny Architecture Optimizations Impact Massively Scaled GenAI Systems\n- AMD Instinct MI300X Generative AI Accelerator and Platform Architecture\n- An AI Compute ASIC with Optical Attach to Enable Next Generation Scale-up Architectures\n- RNGD: A Tensor Contraction Processor\n- Next Generation AMD Versal AI Edge Series for Vision and Automotive\n- Onyx: A Programmable Accelerator for Sparse Tensor Algebra\n- Next Gen MTIA - Meta's Recommendation Inference Accelerator\n\n### 2024 ISCA\n- ReAIM: A ReRAM-based Adaptive Ising Machine for Solving Combinatorial Optimization Problems\n- Splitwise: Efficient Generative LLM Inference Using Phase Splitting\n- Mind the Gap: Attainable Data Movement and Operational Intensity Bounds for Tensor Algorithms\n- A Reconfigurable Accelerator with Data Reordering Support for Low-Cost On-Chip Dataflow Switching\n- Waferscale Network Switches\n- PID-Comm: A Fast and Flexible Collective Communication Framework for Commodity Processing-in-DIMMs\n- PreSto: An In-Storage Data Preprocessing System for Training Recommendation Models\n- pSyncPIM: Partially Synchronous Execution of Sparse Matrix Operations for All-bank PIM Architectures\n- NDSearch: Accelerating Graph-Traversal-Based Approximate Nearest Neighbor Search through Near Data Processing\n- Enabling Efficient Large Recommendation Model Training with Near CXL Memory Processing\n- Exploiting Similarity Opportunity of Emerging AI Models on 3D Hybrid Bonding Architecture\n- NDPBridge: Enabling Cross-Bank Coordination in Near-DRAM-Bank Processing Architectures\n- UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space\n- MegIS: High-Performance and Low-Cost Metagenomic Analysis with In-Storage Processing\n- On Error Correction for Nonvolatile PiM\n- MAD Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems\n- Cambricon-D: Full-Network Differential Acceleration for Diffusion Models\n- Flagger: Cooperative Acceleration for Large-Scale Cross-Silo Federated Learning Aggregation\n- Trapezoid: A Versatile Accelerator for Dense and Sparse Matrix Multiplications\n- NeuraChip: Accelerating GNN Computations with a Hash-based Decoupled Spatial Accelerator\n- Soter: Analytical Tensor-Architecture Modeling and Automatic Tensor Program Tuning for Spatial Accelerators\n- ALISA: Accelerating Large 
Language Model Inference via Sparsity-Aware KV Caching\n- Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference\n- MECLA: Memory-Compute-Efficient LLM Accelerator with Scaling Sub-matrix Partition\n- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization\n- Heterogeneous Acceleration Pipeline for Recommendation System Training\n- LLMCompass: Enabling Efficient Hardware Design for Large Language Model Inference\n\n### 2024 MICRO\n- CamPU: A Multi-Camera Processing Unit for Deep Learning-based 3D Spatial Computing Systems\n- AdapTiV: Sign-Similarity based Image-Adaptive Token Merging for Vision Transformer Acceleration\n- Fusion-3D: Integrated Acceleration for Instant 3D Reconstruction and Real-Time Rendering\n- A Mess of Memory System Benchmarking, Simulation and Application Profiling\n- Stellar: An Automated Design Framework for Dense and Sparse Spatial Accelerators\n- LUCIE: A Universal Chiplet-Interposer Design Framework for Plug-and-Play Integration\n- SRender: Boosting Neural Radiance Field Efficiency via Sensitivity-Aware Dynamic Precision Rendering\n- EMP: Efficient 4-bit Matrix Unit via Primitivization\n- BBS: Bi-directional Bit-level Sparsity for Deep Learning Acceleration\n- SCAR: Scheduling Multi-Model AI Workloads on Heterogeneous Multi-Chiplet Module Accelerators\n- SCALE: A Structure-Centric Accelerator for Message Passing Graph Neural Networks\n- Low-overhead General-purpose Near-Data Processing in CXL Memory Expanders\n- PIFS-Rec: Process-In-Fabric-Switch for Large-Scale Recommendation System Inferences\n- PIM-MMU: A Memory Management Unit for Accelerating Data Transfers in Commercial PIM Systems\n- Azul: An Accelerator for Sparse Iterative Solvers Leveraging Distributed On-Chip Memory\n- FloatAP: Supporting High-Performance Floating-Point Arithmetic in Associative Processors\n- COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation\n- SOFA: A Compute-Memory Optimized Sparsity Accelerator via Cross-Stage Coordinated Tiling\n- Leviathan: A Unified System for General-Purpose Near-Data Computing\n- TMiner: A Vertex-Based Task Scheduling Architecture for Graph Pattern Mining\n- PointCIM: A Computing-in-Memory Architecture for Accelerating Deep Point Cloud Analytics\n- SambaNova SN40L: Scaling the AI Memory Wall with Dataflow and Composition of Experts\n- Duplex: A Device for Large Language Models with Mixture of Experts, Grouped Query Attention, and Continuous Batching\n- VGA: Hardware Accelerator for Scalable Long Sequence Model Inference\n- FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design\n- FlashLLM: A Chiplet-Based In-Flash Computing Architecture to Enable On-Device Inference of 70B LLM\n- GauSPU: 3D Gaussian Splatting Processor for Real-Time SLAM Systems\n- PyPIM: Integrating Digital Processing-in-Memory from Microarchitectural Design to Python Tensors\n- Stream-Based Data Placement for Near-Data Processing with Extended Memory\n- FiboCIM: a Fibonacci-coded Charge-domain SRAM-based CIM Accelerator for DNN Inference\n- MeMCISA: Memristor-enabled Memory-Centric Instruction-Set Architecture for Database Systems\n\n### 2025 ISSCC\n- 1.78mJ\u002FFrame 373fps 3D GS Processor Based on Shape-Aware Hybrid Architecture Using Earlier Computation Skipping and Gaussian Cache Scheduler\n- IRIS: A 8.55mJ\u002Fframe Spatial Computing SoC for Interactable Rendering and Surface-Aware Modeling with 3D Gaussian Splatting\n- A 16nm 216kb, 
188.4TOPS\u002FW and 133.5TFLOPS\u002FW Microscaling Multi-Mode Gain-Cell CIM Macro for Edge-AI Devices\n- A 51.6TFLOPs\u002FW Full-Datapath CIM Macro Approaching Sparsity Bound and \u003C2^-30 Loss for Compound AI\n- RNGD: A 5nm Tensor-Contraction Processor for Power-Efficient Inference on Large Language Models\n- An On-Device Generative AI Focused Neural Processing Unit in 4nm Flagship Mobile SoC with Fan-Out Wafer-Level Package\n- SambaNova SN40L: A 5nm 2.5D Dataflow Accelerator with Three Memory Tiers for Trillion Parameter AI\n- T-REX: A 68-to-567μs\u002FToken 0.41-to-3.95μJ\u002FToken Transformer Accelerator with Reduced External Memory Access and Enhanced Hardware Utilization in 16nm FinFET\n- 28nm 0.22μJ\u002FToken Memory-Compute-Intensity-Aware CNN-Transformer Accelerator with Hybrid-Attention-Based Layer-Fusion and Cascaded Pruning for Semantic-Segmentation\n- EdgeDiff: 418.4mJ\u002FInference Multi-Modal Few-Step Diffusion Model Accelerator with Mixed-Precision and Reordered Group Quantization\n- Nebula: A 28nm 109.8TOPS\u002FW 3D PNN Accelerator Featuring Adaptive Partition, Multi-Skipping, and Block-Wise Aggregation\n- MAE: A 3nm 0.168mm2 576MAC Mini AutoEncoder with Line-based Depth-First Scheduling for Generative AI in Vision on Edge Devices\n- MEGA.mini: A Universal Generative AI Processor with a New Big\u002FLittle Core Architecture for NPU\n- BROCA: A 52.4-to-559.2mW Mobile Social Agent System-on-Chip with Adaptive Bit-Truncate Unit and Acoustic-Cluster Bit Grouping\n- An 88.36TOPS\u002FW Bit-Level-Weight-Compressed Large-Language-Model Accelerator with Cluster-Aligned INT-FP-GEMM and Bi-Dimensional Workflow Reformulation\n- Slim-Llama: A 4.69mW Large-Language-Model Processor with Binary\u002FTernary Weights for Billion-Parameter Llama Model\n- HuMoniX: A 57.3fps 12.8TFLOPS\u002FW Text-to-Motion Processor with Inter-Iteration Output Sparsity and Inter-Frame Joint Similarity\n- A 22nm 60.81TFLOPS\u002FW Diffusion Accelerator with Bandwidth-Aware Memory Partition and BL-Segmented Compute-in-Memory for Efficient Multi-Task Content Generation\n\n### 2025 HPCA\n- eDKM: An Efficient and Accurate Train-Time Weight Clustering for Large Language Models\n- SoMA: Identifying, Exploring, and Understanding the DRAM Communication Scheduling Space for DNN Accelerators\n- LUT-DLA: Lookup Table as Efficient Extreme Low-Bit Deep Learning Accelerator\n- BitMoD: Bit-serial Mixture-of-Datatype LLM Acceleration\n- FIGLUT: An Energy-Efficient Accelerator Design for FP-INT GEMM Using Look-Up Tables\n- MANT: Efficient Low-bit Group Quantization for LLMs via Mathematically Adaptive Numerical Type\n- Enhancing Large-Scale AI Training Efficiency: The C4 Solution for Real-Time Anomaly Detection and Communication Optimization\n- Revisiting Reliability in Large-Scale Machine Learning Research Clusters\n- Anda: Unlocking Efficient LLM Inference with a Variable-Length Grouped Activation Data Format\n- LAD: Efficient Accelerator for Generative Inference of LLM with Locality Aware Decoding\n- VQ-LLM: High-performance Code Generation for Vector Quantization Augmented LLM Inference\n- InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference\n- PIMnet: A Domain-Specific Network for Efficient Collective Communication in Scalable PIM\n- EIGEN: Enabling Efficient 3DIC Interconnect with Heterogeneous Dual-Layer Network-on-Active-Interposer\n- PAISE: PIM-Accelerated Inference Scheduling Engine for Transformer-based LLM\n- FACIL: Flexible DRAM Address Mapping for SoC-PIM Cooperative 
On-device LLM Inference\n- Lincoln: Real-Time 50~100B LLM Inference on Consumer Devices with LPDDR-Interfaced, Compute-Enabled Flash Memory\n- Make LLM Inference Affordable to Everyone: Augmenting GPU Memory with NDP-DIMM\n\n### 2025 ASPLOS\n- DynaX: Sparse Attention Acceleration with Dynamic X:M Fine-Grained Structured Pruning\n- ReCA: Integrated Acceleration for Real-Time and Efficient Cooperative Embodied Autonomous Agents\n- Fast On-device LLM Inference with NPUs\n- Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow\n- FlexSP: Accelerating Large Language Model Training via Flexible Sequence Parallelism\n- Spindle: Efficient Distributed Training of Multi-Task Large Models via Wavefront Scheduling\n- Concerto: Automatic Communication Optimization and Scheduling for Large-Scale Deep Learning\n- MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs\n- FSMoE: A Flexible and Scalable Training System for Sparse Mixture-of-Experts Models\n- CoServe: Efficient Collaboration-of-Experts (CoE) Model Inference with Limited Memory\n- Klotski: Efficient Mixture-of-Expert Inference via Expert-Aware Multi-Batch Pipeline\n- MoC-System: Efficient Fault Tolerance for Sparse Mixture-of-Experts Model Training\n- Accelerating LLM Serving for Multi-turn Dialogues with Efficient Resource Management\n- COMET: Towards Practical W4A4KV4 LLMs Serving\n- Past-Future Scheduler for LLM Serving under SLA Guarantees\n- POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference\n- TAPAS: Thermal- and Power-Aware Scheduling for LLM Inference in Cloud Platforms\n- PAPI: Exploiting Dynamic Parallelism in Large Language Model Decoding with a Processing-In-Memory-Enabled Computing System\n- Be CIM or Be Memory: A Dual-mode-aware DNN Compiler for CIM Accelerators\n\n### 2025 ISCA\n- WSC-LLM: Efficient LLM Service and Architecture Co-exploration for Wafer-scale Chips\n- FRED: A Wafer-scale Fabric for 3D Parallel DNN Training\n- PD Constraint-aware Physical\u002FLogical Topology Co-Design for Network on Wafer\n- H2-LLM: Hardware-Dataflow Co-Exploration for Heterogeneous Hybrid-Bonding-based Low-Batch LLM Inference\n- HiPER: Hierarchically-Composed Processing for Efficient Robot Learning-Based Control\n- Dadu-Corki: Algorithm-Architecture Co-Design for Embodied AI-powered Robotic Manipulation\n- SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting\n- Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization\n- Chimera: Communication Fusion for Hybrid Parallelism in Large Language Models\n- LUT Tensor Core: A Software-Hardware Co-Design for LUT-Based Low-Bit LLM Inference\n- AiF: Accelerating On-Device LLM Inference Using In-Flash Processing\n- LIA: A Single-GPU LLM Inference Acceleration with Cooperative AMX-Enabled CPU-GPU Computation and CXL Offloading\n- Cramming a Data Center into One Cabinet: A Co-Exploration of Computing and Hardware Architecture of Waferscale Chip\n- Ecco: Improving Memory Bandwidth and Capacity for LLMs via Entropy-Aware Cache Compression\n- Hybe: GPU-NPU Hybrid System for Efficient LLM Inference with Million-Token Context Window\n- MeshSlice: Efficient 2D Tensor Parallelism for Distributed DNN Training\n- AIM: Software and Hardware Co-design for Architecture-level IR-drop Mitigation in High-performance PIM\n- OptiPIM: Optimizing Processing-in-Memory Acceleration Using Integer Linear Programming\n- HeterRAG: Heterogeneous Processing-in-Memory Acceleration for 
Retrieval-augmented Generation\n- ATiM: Autotuning Tensor Programs for Processing-in-DRAM\n- Hybrid SLC-MLC RRAM Mixed-Signal Processing-in-Memory Architecture for Transformer Acceleration via Gradient Redistribution\n- MagiCache: A Virtual In-Cache Computing Engine\n- AMALI: An Analytical Model for Accurately Modeling LLM Inference on Modern GPUs\n- MicroScopiQ: Accelerating Foundational Models through Outlier-Aware Microscaling Quantization\n- HYTE: Flexible Tiling for Sparse Accelerators via Hybrid Static-Dynamic Approaches\n- NUPEA: Optimizing Critical Loads on Spatial Dataflow Architectures via Non-Uniform Processing-Element Access\n- Meta's Second Generation AI Chip: Model-Chip Co-Design and Productionization Experiences\n- Scaling Llama 3 Training with Efficient Parallelism Strategies\n- Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures\n- BingoGCN: Towards Scalable and Efficient GNN Acceleration with Fine-Grained Partitioning and SLT\n\n### 2025 HotChips\n- Memory: (Almost) the Only Thing That Matters\n- UB-Mesh: Huawei's Next-Gen AI SuperComputer with A Unified-Bus Interconnect and nD-FullMesh Architecture\n- Corsair - An In-memory Computing Chiplet Architecture for Inference-time Compute Acceleration\n- NVIDIA's GB10 SoC: AI Supercomputer On Your Desk\n- 4th Gen AMD CDNA™ Generative AI Architecture Powering AMD Instinct™ MI350 Series Accelerators and Platforms\n- Ironwood: Delivering best in class perf, perf\u002FTCO and perf\u002FWatt for reasoning model training and serving\n\n### 2025 MICRO\n- Stratum: System-Hardware Co-design with Tiered Monolithic 3D-Stackable DRAM for Efficient MoE Serving. (UC San Diego, Georgia Tech, UIUC, Illinois Tech)\n- Kelle: Co-design KV Caching and eDRAM for Efficient LLM Serving in Edge Computing. (New York University)\n- LongSight: Compute-Enabled Memory to Accelerate Large-Context LLMs via Sparse Attention. (Cornell University)\n- ComPASS: A Compatible PIM Protocol Architecture and Scheduling Solution for Processor-PIM Collaboration. (Inha University)\n- PIM-CCA: An Efficient PIM Architecture with Optimized Integration of Configurable Functional Units. (Yonsei University, KAIST, Hanyang University)\n- 3D-PATH: A Hierarchy LUT Processing-in-memory Accelerator with Thermal-aware Hybrid Bonding Integration. (Tsinghua University, Shanghai Jiao Tong University)\n- DECA: A Near-Core LLM Decompression Accelerator Grounded on a 3D Roofline Model. (Intel, UIUC)\n- StreamTensor: Make Tensors Stream in Dataflow Accelerators for LLMs. (UIUC, Inspirit IoT)\n- Chameleon: Adaptive Caching and Scheduling for Many-Adapter LLM Inference Environments. (UIUC, IBM Research)\n- Coruscant: Co-Designing GPU Kernel and Sparse Tensor Core to Advocate Unstructured Sparsity in Efficient LLM Inference. (University of Maryland, d-Matrix)\n- Accelerating Retrieval Augmented Language Model via PIM and PNM Integration. (Yonsei University, Santa Clara University)\n- HEAT: NPU-NDP Heterogeneous Architecture for Transformer-Empowered Graph Neural Networks. (Shanghai Jiao Tong University, Chinese Academy of Sciences)\n- RayN: Ray Tracing Acceleration with Near-memory Computing. (University of British Columbia)\n- Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving. (KAIST, Uppsala University, Georgia Tech)\n- GateBleed: Exploiting On-Core Accelerator Power Gating for High Performance and Stealthy Attacks on AI. 
(North Carolina State University, Intel)\n- Athena: Accelerating Quantized Convolutional Neural Networks under Fully Homomorphic Encryption. (Institute of Computing Technology CAS, University of Electronic Science and Technology of China)\n- ccAI: A Compatible and Confidential System for AI Computing. (University of Science and Technology, The Hong Kong Polytechnic University, Ant Group, Southern University of Science and Technology)\n- Ironman: Accelerating Oblivious Transfer Extension for Privacy-Preserving AI with Near-Memory Processing. (Peking University, State Key Laboratory of Cryptology, Alibaba Group, Tsinghua University)\n- S-DMA: Sparse Diffusion Models Acceleration via Spatiality-Aware Prediction and Dimension-Adaptive Dataflow. (Southeast University)\n- LLM.265: Video Codecs are Secretly Tensor Codecs. (Duke University, Carnegie Mellon University)\n- HLX: A Unified Pipelined Architecture for Optimized Performance of Hybrid Transformer-Mamba Language Models. (KAIST)\n- ORCHES: Orchestrated Test-Time-Compute-based LLM Reasoning on Collaborative GPU-PIM HEterogeneous System. (Georgia Institute of Technology)\n- NetZIP: Algorithm\u002FHardware Co-design of In-network Lossless Compression for Distributed Large Model Training. (UIUC, IBM Research)\n- Characterizing the Efficiency of Distributed Training: A Power, Performance, and Thermal Perspective. (Georgia Tech)\n- SkipReduce: (Interconnection) Network Sparsity to Accelerate Distributed Machine Learning. (KAIST, NVIDIA, Hanyang University)\n- Optimizing All-to-All Collective Communication with Fault Tolerance on Torus Networks. (HKUST(GZ), Huawei)\n- AxCore: A Quantization-Aware Approximate GEMM Unit for LLM Inference. (HKUST(GZ))\n- Amove: Accelerating LLMs through Mitigating Outliers and Salient Points via Fine-Grained Grouped Vectorized Data Type. (Beihang University, Tsinghua University)\n- MX+: Pushing the Limits of Microscaling Formats for Efficient Large Language Model Serving. (Seoul National University)\n- ReGate: Enabling Power Gating in Neural Processing Units. (UIUC)\n- Multi-Dimensional ML-Pipeline Optimization in Cost-Effective Disaggregated Datacenter. (Pennsylvania State University, META, IBM, AMD)\n- Crane: Inter-Layer Scheduling Framework for DNN Inference and Training Co-Support on Tiled Architecture. (Rutgers University, Texas A&M University, NVIDIA)\n- OASIS: A Commercial High Performance Terminal AI Processor Supporting RISCV Tensor Extension Instructions. (Beijing University of Posts and Telecommunications, Sophgo Technologies)\n- ELK: Exploring the Efficiency of Inter-core Connected AI Chips with Deep Learning Compiler Techniques. (UIUC, Microsoft Research)\n- Empowering Vector Architectures for ML: The CAMP Architecture for Matrix Multiplication. (Barcelona Supercomputing Center, Polytechnic University of Catalonia)\n- TAIDL: Tensor Accelerator ISA Definition Language with Auto-generation of Scalable Test Oracles. (UIUC)\n- Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device. (Cornell University, University of Southern California, MIT, GSI Inc.)\n- SuperMesh: Energy-Efficient Collective Communications for Accelerators. (Texas A&M University)\n- BitL: A Hybrid Bit-Serial and Parallel Deep Learning Accelerator for Critical Path Reduction. (Yonsei University, Samsung Electronics)\n- HiPACK: Efficient Sub-8-Bit Direct Convolution with SIMD and Bitwise Management. 
(National University of Singapore, Tiangong University)\n- MCBP: A Memory-Compute Efficient LLM Inference Accelerator Leveraging Bit-Slice-enabled Sparsity and Repetitiveness. (Tsinghua University, Shanghai Jiao Tong University)\n- PolymorPIC: Embedding Polymorphic Processing-in-Cache in RISC-V based Processor for Full-stack Efficient AI Inference. (Shanghai Jiao Tong University, Shanghai AI Lab)\n- MHE-TPE: Multi-Operand High-Radix Encoder for Mixed-Precision Fixed-Point Tensor Processing Engines. (USTC, University of Washington, Raytron Technology)\n- SMX: Heterogeneous Architecture for Universal Sequence Alignment Acceleration. (Barcelona Supercomputing Center, UPC, Cornell University)\n\n### 2026 HPCA\n- Focus: A Streaming Concentration Architecture for Efficient Vision-Language Models\n- PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion\n- HR-DCIM: High-Reliability Floating-Point Digital CIM Architecture with Unified Low-Cost Iterative Error Correction\n- WATOS: Efficient LLM Training Strategies and Architecture Co-exploration for Wafer-scale Chip\n- ELORA: Efficient LoRA and KV Cache Management for Multi-LoRA LLM Serving\n- FACE: Fully Overlapped PD Scheduling and Multi-Level Architecture Co-Exploration on Wafer\n- TEMP: A Memory Efficient Physical-aware Tensor Partition-Mapping Framework on Wafer-scale Chips\n- AQPIM: Breaking the PIM Capacity Wall for LLMs with In-Memory Activation Quantization\n- MoEntwine: Unleashing the Potential of Wafer-scale Chips for Large-scale Expert Parallel Inference\n- AUM: Unleashing the Efficiency Potential of Shared Processors with Accelerator Units for LLM Serving\n- Uni-STC: Unified Sparse Tensor Core\n- PIMphony: Overcoming Bandwidth and Capacity Inefficiency in PIM-based Long-Context LLM Inference System\n- RoMe: Row Granularity Access Memory System for Large Language Models\n- VAR-Turbo: Unlocking the Potential of Visual Autoregressive Models through Dual Redundancy\n- V-Rex: Real-Time Streaming Video LLM Acceleration via Dynamic KV Cache Retrieval\n- CoCoTree: A Computation-Capable Architecture for Collective Communication in Scalable PIM\n- AutoGNN: End-to-End Hardware-Driven Graph Preprocessing for Enhanced GNN Performance\n- BitDecoding: Unlocking Tensor Cores for Long-Context LLMs with Low-Bit KV Cache\n- RPU: A Reasoning Processing Unit\n","# 硅基神经网络\n\n涂锋斌博士是香港科技大学集成电路与系统研究所的助理教授及副所长，国家自然科学基金优秀青年科学基金获得者，同时也是 InnoHK 旗下新兴智能系统 AI 芯片中心（ACCESS）的核心教员。有关涂博士的更多信息，请参阅[其个人主页](https:\u002F\u002Ffengbintu.github.io\u002F)。涂博士的主要研究方向为 AI 芯片与系统。这是一个每天都有新想法涌现的激动人心的领域，因此他正在持续收集相关主题的研究工作。欢迎加入！\n\n## 目录\n - [我的贡献](#我的贡献)\n - [会议论文](#会议论文)\n   - 2014: [ASPLOS](#2014-asplos), [MICRO](#2014-micro)\n   - 2015: [ISCA](#2015-isca), [ASPLOS](#2015-asplos), [FPGA](#2015-fpga), [DAC](#2015-dac)\n   - 2016: [ISSCC](#2016-isscc), [ISCA](#2016-isca), [MICRO](#2016-micro), [HPCA](#2016-hpca), [DAC](#2016-dac), [FPGA](#2016-fpga), [ICCAD](#2016-iccad), [DATE](#2016-date), [ASPDAC](#2016-aspdac), [VLSI](#2016-vlsi), [FPL](#2016-fpl)\n   - 2017: [ISSCC](#2017-isscc), [ISCA](#2017-isca), [MICRO](#2017-micro), [HPCA](#2017-hpca), [ASPLOS](#2017-asplos), [DAC](#2017-dac), [FPGA](#2017-fpga), [ICCAD](#2017-iccad), [DATE](#2017-date), [VLSI](#2017-vlsi), [FCCM](#2017-fccm), [HotChips](#2017-hotchips)\n   - 2018: [ISSCC](#2018-isscc), [ISCA](#2018-isca), [MICRO](#2018-micro), [HPCA](#2018-hpca), [ASPLOS](#2018-asplos), [DAC](#2018-dac), [FPGA](#2018-fpga), [ICCAD](#2018-iccad), [DATE](#2018-date), [ASPDAC](#2018-aspdac), 
[VLSI](#2018-vlsi), [HotChips](#2018-hotchips)\n   - 2019: [ISSCC](#2019-isscc), [ISCA](#2019-isca), [MICRO](#2019-micro), [HPCA](#2019-hpca), [ASPLOS](#2019-asplos), [DAC](#2019-dac), [FPGA](#2019-fpga), [ICCAD](#2019-iccad), [ASPDAC](#2019-aspdac), [VLSI](#2019-vlsi), [HotChips](#2019-hotchips), [ASSCC](#2019-asscc)\n   - 2020: [ISSCC](#2020-isscc), [ISCA](#2020-isca), [MICRO](#2020-micro), [HPCA](#2020-hpca), [ASPLOS](#2020-asplos), [DAC](#2020-dac), [FPGA](#2020-fpga), [ICCAD](#2020-iccad), [VLSI](#2020-vlsi), [HotChips](#2020-hotchips)\n   - 2021: [ISSCC](#2021-isscc), [ISCA](#2021-isca), [MICRO](#2021-micro), [HPCA](#2021-hpca), [ASPLOS](#2021-asplos), [DAC](#2021-dac), [ICCAD](#2021-iccad), [VLSI](#2021-vlsi), [HotChips](#2021-hotchips)\n   - 2022: [ISSCC](#2022-isscc), [ISCA](#2022-isca), [MICRO](#2022-micro), [HPCA](#2022-hpca), [ASPLOS](#2022-asplos), [HotChips](#2022-hotchips)\n   - 2023: [ISSCC](#2023-isscc), [ISCA](#2023-isca), [MICRO](#2023-micro), [HPCA](#2023-hpca), [ASPLOS](#2023-asplos), [HotChips](#2023-hotchips)\n   - 2024: [ISSCC](#2024-isscc), [ISCA](#2024-isca), [MICRO](#2024-micro), [HPCA](#2024-hpca), [ASPLOS](#2024-asplos), [HotChips](#2024-hotchips)\n   - 2025: [ISSCC](#2025-isscc), [ISCA](#2025-isca), [MICRO](#2025-micro), [HPCA](#2025-hpca), [ASPLOS](#2025-asplos), [HotChips](#2025-hotchips)\n   - 2026: [HPCA](#2026-hpca)\n\n## 我的贡献\n我的主要研究兴趣是 AI 芯片与架构。关于我本人及研究工作的更多信息，请访问[我的研究主页](https:\u002F\u002Ffengbintu.github.io\u002Fresearch\u002F)。\n\n## 会议论文\n这是我收集的一些我感兴趣的与 AI 芯片相关的会议论文。\n\n### 2014 ASPLOS\n- **DianNao：面向普适机器学习的小型高吞吐加速器。** (中科院, Inria)\n\n### 2014 MICRO\n- **DaDianNao：一种机器学习超级计算机。** (中科院, Inria, 内蒙古大学)\n\n### 2015 ISCA\n- **ShiDianNao：将视觉处理推向传感器端。** (中科院, EPFL, Inria)\n\n### 2015 ASPLOS\n- **PuDianNao：一种多功能机器学习加速器。** (中科院, 中国科学技术大学, Inria)\n\n### 2015 FPGA\n- **优化基于 FPGA 的深度卷积神经网络加速器设计。** (北京大学, UCLA)\n\n### 2015 DAC\n- Reno: 一种高效率可重构类脑计算加速器设计。（匹兹堡大学、清华大学、旧金山州立大学、空军研究实验室、马萨诸塞大学）\n- 面向能效机器学习的可扩展努力分类器。（普渡大学、微软研究院）\n- 近阈值计算（NTC）区域下的设计方法学。（AMD）\n- 在 NTC 中利用性能瓶颈范式转变的机会性 Turbo 执行。（犹他州立大学）\n\n### 2016 DAC\n- **DeepBurning：自动为神经网络家族生成 FPGA 学习加速器。** (中国科学院)\n  - *硬件生成器：神经网络基础构建模块与地址生成单元（RTL）。*\n  - *编译器：动态控制流（不同模型的配置）与内存中的数据布局。*\n  - *仅报告其框架并描述部分阶段。*\n- **C-Brain：通过自适应数据级并行化驯服 CNN 多样性的深度学习加速器。** (中国科学院)\n- **为类脑架构简化深度神经网络。** (仁川国立大学)\n- **在深度神经网络中使用随机计算实现动态能耗-精度权衡。** (三星、首尔国立大学、蔚山科学技术院)\n- **近似计算范式下 JPEG 硬件的最优设计。** (明尼苏达大学、德州农工大学)\n- Perform-ML：通过平台与内容感知定制实现性能优化的机器学习。（莱斯大学、加州大学圣地亚哥分校）\n- 用于图像处理应用的基于畴壁运动“自旋忆阻器”的低功耗近似卷积计算单元。（普渡大学）\n- 类脑计算的跨层近似：从器件到电路与系统。（普渡大学）\n- 基于输入切换的 RRAM 卷积神经网络节能结构。（清华大学）\n- 适用于深度学习应用、具有高温变化免疫能力的 2.2 GHz SRAM（28nm 工艺）。（UCLA、贝尔实验室）\n\n### 2016 ISSCC\n- **面向智能物联网系统的 1.42TOPS\u002FW 深度卷积神经网络识别处理器。** (KAIST)\n- **Eyeriss：一种面向深度卷积神经网络的高能效可重构加速器。** (MIT, NVIDIA)\n- 一款集成深度学习核心、支持实时自然用户界面\u002F用户体验的 126.1mW 处理器，适用于低功耗智能眼镜系统。（KAIST）\n- 一款 502GOPS、0.984mW 双模式 ADAS SoC，配备用于汽车黑匣子系统意图预测的 RNN-FIS 引擎。（KAIST）\n- 一款带 PVT 补偿、适用于微型机器人的 0.55V 1.1mW 人工智能处理器。（KAIST）\n- 一款面向 8K 超高清应用、支持 8\u002F10 位 H.265\u002FHEVC 视频解码、处理能力达 4Gpixel\u002Fs 的芯片。（早稻田大学）\n\n### 2016 ISCA\n - **Cnvlutin：无无效神经元的深度卷积神经网络计算。**（多伦多大学，不列颠哥伦比亚大学）\n - **EIE：压缩深度神经网络上的高效推理引擎。**（斯坦福大学，清华大学）\n - **Minerva：支持低功耗、高精度的深度神经网络加速器。**（哈佛大学）\n - **Eyeriss：面向卷积神经网络的节能数据流空间架构。**（麻省理工学院，NVIDIA）\n   - *提出一个能耗分析框架。*\n   - *提出一种名为“行驻留（Row Stationary）”的节能数据流，该数据流考虑了三级复用。*\n - **Neurocube：具备高密度3D内存的可编程数字神经形态架构。**（佐治亚理工学院，SRI国际）\n   - *提出一种集成于3D DRAM中的架构，逻辑层内含类网格的片上网络（NOC）。*\n   - *详细描述了NOC中的数据移动过程。*\n - **ISAAC：基于交叉开关阵列的原位模拟运算卷积神经网络加速器。**（犹他大学，惠普实验室）\n   - *其后续改进工作已发表于《Newton: Gravitating Towards the Physical 
Limits of Crossbar Acceleration》（IEEE Micro）。*\n - **基于ReRAM主存的新型存内计算架构用于神经网络计算。**（加州大学圣塔芭芭拉分校，惠普实验室，NVIDIA，清华大学）\n - **RedEye：用于连续移动视觉的模拟卷积网络图像传感器架构。**（莱斯大学）\n - **Cambricon：面向神经网络的指令集架构。**（中国科学院，加州大学圣塔芭芭拉分校）\n\n### 2016 DATE\n- **The Neuro Vector Engine：通过灵活性提升可穿戴视觉中卷积网络效率。**（埃因霍温理工大学，苏州大学，柏林工业大学）\n  - *提出一种面向CNN的SIMD加速器。*\n- **使用逻辑三维计算阵列实现卷积神经网络的高效FPGA加速。**（蔚山国立科学技术院，首尔国立大学）\n  - *计算单元在三个维度组织：Tm, Tr, Tc。*\n- **NEURODSP：面向神经网络的多用途能效优化加速器。**（CEA LIST）\n- **MNSIM：基于忆阻器的神经形态计算系统仿真平台。**（清华大学，加州大学圣塔芭芭拉分校，亚利桑那州立大学）\n- **用于汽车系统故障检测的FPGA加速人工神经网络。**（南洋理工大学，华威大学）\n- **面向人工神经网络能效突触存储的8T-6T SRAM混合设计，基于重要性驱动。**（普渡大学）\n\n### 2016 FPGA\n- **在嵌入式FPGA平台上深入运行卷积神经网络。** \\[[幻灯片](http:\u002F\u002Fwww.isfpga.org\u002Ffpga2016\u002Findex_files\u002FSlides\u002F1_2.pdf)\\]\\[[演示视频](http:\u002F\u002Fwww.isfpga.org\u002Ffpga2016\u002Findex_files\u002FSlides\u002F1_2_demo.m4v)\\]（清华大学，微软亚洲研究院）\n  - *我所见的第一项完整运行CNN全流程（包括CONV和FC层）的工作。*\n  - *指出CONV层以计算为中心，而FC层以内存为中心。*\n  - *FPGA无需重新配置资源即可运行VGG16-SVD，但卷积器仅支持k=3。*\n  - *动态精度数据量化具有创新性，但未在硬件上实现。*\n- **面向大规模卷积神经网络的吞吐量优化OpenCL FPGA加速器。** \\[[幻灯片](http:\u002F\u002Fwww.isfpga.org\u002Ffpga2016\u002Findex_files\u002FSlides\u002F1_1.pdf)\\]（亚利桑那州立大学，ARM）\n  - *在FPGA上为空间分配CONV\u002FPOOL\u002FNORM\u002FFC各层资源。*\n\n### 2016 ASPDAC\n- **FPGA上深度卷积神经网络的设计空间探索。**（加州大学戴维斯分校）\n- **LRADNN：基于低秩近似的高吞吐量、高能效深度神经网络加速器。**（香港科技大学，上海交通大学）\n- **面向物联网设备的高效嵌入式学习。**（普渡大学）\n- **ACR：为近似计算启用计算复用。**（中国科学院）\n\n### 2016 VLSI\n- **一款0.3‐2.6 TOPS\u002FW精度可扩展处理器，用于实时大规模卷积网络。**（鲁汶大学）\n  - *对不同CONV层采用动态精度，并在低精度时降低MAC阵列供电电压。*\n  - *根据ReLU稀疏性避免内存读取与MAC操作。*\n- **一款1.40mm²、141mW、898GOPS的稀疏神经形态处理器，采用40nm CMOS工艺。**（密歇根大学）\n- **一款58.6mW实时可编程目标检测器，支持多尺度多目标，在1920x1080视频上以30fps运行，采用可变形部件模型。**（麻省理工学院）\n- **在标准6T SRAM阵列中实现的机器学习分类器。**（普林斯顿大学）\n\n### 2016 ICCAD\n- **面向语音应用的深度神经网络中使用粗粒度稀疏化的高效内存压缩方法。**（亚利桑那州立大学）\n- **Memsqueezer：为嵌入式设备的深度学习加速器重构片上内存子系统架构。**（中国科学院）\n- **Caffeine：面向深度卷积神经网络的统一表示与加速方法。**（北京大学，加州大学洛杉矶分校，Falcon）\n  - *提出一种统一的卷积矩阵乘法表示法，用于在FPGA上加速CONV和FC层。*\n  - *为FC层提出一种权重主导的卷积映射方法，具备良好的数据复用性、DRAM访问突发长度和有效带宽。*\n- **BoostNoC：面向近阈值计算的高能效片上网络架构。**（犹他州立大学）\n- **面向近似人工神经网络的低功耗近似乘法器设计。**（布尔诺理工大学）\n- **神经网络设计神经网络：多目标超参数优化。**（麦吉尔大学）\n\n### 2016 MICRO\n- **从高层深度神经网络模型到 FPGA。**（佐治亚理工学院，英特尔）\n  - *为 DNN 加速器开发一种宏数据流指令集架构（macro dataflow ISA）。*\n  - *开发可扩展且高度可定制的手工优化模板设计。*\n  - *提供一种模板资源优化搜索算法，协同优化加速器架构与调度策略。*\n- **vDNN：面向可扩展、内存高效神经网络设计的虚拟化深度神经网络。**（NVIDIA）\n- **Stripes：位串行深度神经网络计算。**（多伦多大学，不列颠哥伦比亚大学）\n  - *在神经网络加速器设计中引入串行计算与低精度计算，实现精度与性能之间的权衡。*\n  - *设计一种位串行计算单元，使性能随精度降低呈线性扩展。*\n- **Cambricon-X：面向稀疏神经网络的加速器。**（中国科学院）\n- **NEUTRAMS：在类脑硬件约束下的神经网络变换与协同设计。**（清华大学，加州大学圣塔芭芭拉分校）\n- **融合层 CNN 加速器。**（石溪大学）\n  - *融合多个 CNN 层（卷积+池化），以减少输入\u002F输出数据的 DRAM 访问。*\n- **弥合大数据工作负载的 I\u002FO 性能鸿沟：一种基于 NVDIMM 的新方法。**（香港理工大学，美国国家科学基金会\u002F佛罗里达大学）\n- **用于图像处理与计算机视觉的补丁内存系统。**（NVIDIA）\n- **超低功耗自动语音识别硬件加速器。**（加泰罗尼亚理工大学）\n- **用于重用预测的感知机学习。**（德州农工大学，英特尔实验室）\n  - *训练神经网络以预测缓存块的重用情况。*\n- **云规模加速架构。**（微软研究院）\n- **通过在线数据聚类与编码降低数据移动能耗。**（罗切斯特大学）\n- **实时机器人运动规划加速器的微架构。**（杜克大学）\n- **Chameleon：适用于大内存系统的多功能实用近 DRAM 加速架构。**（伊利诺伊大学厄巴纳-香槟分校，首尔国立大学）\n\n### 2016 FPL\n- **面向大规模卷积神经网络的高性能 FPGA 加速器。**（复旦大学）\n- **克服空间型 CNN 加速器中的资源利用率不足问题。**（石溪大学）\n  - *构建多个加速器，每个专用于特定 CNN 层，而非使用单一具有统一分块参数的加速器。*\n- **在分析服务器中加速循环神经网络：FPGA、CPU、GPU 与 ASIC 的比较。**（英特尔）\n\n### 2016 HPCA\n- **用于优化 FPGA 上 OpenCL 应用的性能分析框架。**（南洋理工大学，香港科技大学，康奈尔大学）\n- **TABLA：面向加速统计机器学习的统一模板架构。**（佐治亚理工学院）\n- **忆阻玻尔兹曼机：用于组合优化与深度学习的硬件加速器。**（罗切斯特大学）\n\n### 2017 FPGA\n- **基于 Arria 10 的 OpenCL 深度学习加速器。**（英特尔）\n  - *最低带宽需求：AlexNet 卷积层的所有中间数据均缓存在片上缓冲区中，因此其架构为计算受限型。*\n  - *减少运算量：采用 Winograd 变换。*\n  - *高 DSP 利用率 + 减少计算量 → 
FPGA 上更高性能 → 效率可与 TitanX 竞争。*\n- **ESE：面向压缩 LSTM 的高效语音识别引擎（基于 FPGA）。**（斯坦福大学，DeepPhi，清华大学，NVIDIA）\n- **FINN：用于快速、可扩展二值化神经网络推理的框架。**（赛灵思，挪威科技大学，悉尼大学）\n- **FPGA 能否在加速下一代深度神经网络方面超越 GPU？**（英特尔）\n- **使用软件可编程 FPGA 加速二值化卷积神经网络。**（康奈尔大学，UCLA，UCSD）\n- **提升基于 OpenCL 的 FPGA 加速器在卷积神经网络中的性能。**（威斯康星大学麦迪逊分校）\n- **在 CPU-FPGA 共享内存系统中对卷积神经网络进行频域加速。**（南加州大学）\n- **优化 FPGA 加速深度卷积神经网络中的循环操作与数据流。**（亚利桑那州立大学）\n\n### 2017 ISSCC\n- **面向智能嵌入式系统的 2.9TOPS\u002FW 深度卷积神经网络 SoC（基于 FD-SOI 28nm 工艺）。**（意法半导体）\n- **DNPU：面向通用深度神经网络的 8.1TOPS\u002FW 可重构 CNN-RNN 处理器。**（韩国科学技术院）\n- **ENVISION：支持子字并行、计算精度-电压-频率可扩展的卷积神经网络处理器（0.26–10TOPS\u002FW，28nm FDSOI）。**（鲁汶大学）\n- **配备 270KB 片上权重存储的 288µW 可编程深度学习处理器，采用非均匀内存层次结构，适用于移动智能设备。**（密歇根大学，CubeWorks）\n- **面向物联网应用的 28nm SoC，集成 1.2GHz、568nJ\u002F预测的稀疏深度神经网络引擎，容忍 >0.1 的时序错误率。**（哈佛大学）\n- **具备深度神经网络声学模型与语音激活电源门控的可扩展语音识别器。**（麻省理工学院）\n- **0.62mW 超低功耗卷积神经网络人脸识别处理器，与始终开启的 Haar 类人脸检测器 CIS 集成。**（韩国科学技术院）\n\n### 2017 HPCA\n- **FlexFlow：面向卷积神经网络的灵活数据流加速器架构。**（中国科学院）\n- **PipeLayer：基于 ReRAM 的流水线式深度学习加速器。**（匹兹堡大学，南加州大学）\n- **面向不同 GPU 微架构的普及化与用户满意的 CNN。**（佛罗里达大学）\n  - *CNN 满意度（SoC）是 SoCtime、SoCaccuracy 与能耗的综合指标。*\n  - *P-CNN 框架由离线编译与运行时管理组成。*\n    - *离线编译：通常优化运行时间，并为运行时阶段生成调度配置。*\n    - *运行时管理：通过精度调优生成调优表，并在长期执行过程中校准精度+运行时间（选择最佳调优表）。*\n- **支持以加速器为中心架构的地址转换。**（UCLA）\n\n### 2017 ASPLOS\n- **Tetris：利用 3D 内存实现可扩展且高效的神经网络加速。**（斯坦福大学）\n  - *将累加操作移至靠近 DRAM 存储体的位置。*\n  - *开发一种混合分区方案，在多个加速器上并行化神经网络计算。*\n- **SC-DCNN：使用随机计算的高度可扩展深度卷积神经网络。**（雪城大学，南加州大学，纽约市立学院）\n\n### 2017 ISCA\n- **通过资源分区最大化 CNN 加速器效率。**（石溪大学）\n  - *其 FPL'16 论文的扩展版本。*\n- **张量处理单元（Tensor Processing Unit, TPU）在数据中心内的性能分析。**（Google）\n- **SCALEDEEP：面向深度网络学习与评估的可扩展计算架构。**（普渡大学，英特尔）\n  - *提出一种完整的系统（服务器节点）架构，重点解决 DNN 训练中的层内与层间异构性挑战。*\n- **SCNN：面向压缩稀疏卷积神经网络的加速器。**（NVIDIA、MIT、加州大学伯克利分校、斯坦福大学）\n- **Scalpel：根据底层硬件并行性定制 DNN 剪枝策略。**（密歇根大学，ARM）\n- 异步低精度随机梯度下降的理解与优化。（斯坦福大学）\n- LogCA：面向硬件加速器的高层性能模型。（AMD，威斯康星大学麦迪逊分校）\n- APPROX-NoC：面向片上网络（Network-On-Chip, NoC）架构的数据近似框架。（德州农工大学）\n\n### 2017 FCCM\n- **Escher：通过灵活缓冲最小化片外传输的 CNN 加速器。**（石溪大学）\n- **为高效 FPGA 实现定制神经网络。**\n- **评估 FPGA 上卷积神经网络的快速算法。**\n- **FP-DNN：基于 RTL-HLS 混合模板自动将深度神经网络映射到 FPGA 的框架。**（北京大学、香港科技大学、微软亚洲研究院、UCLA）\n  - *计算密集部分：基于 RTL 的通用矩阵乘法内核。*\n  - *层特定部分：基于 HLS 的控制逻辑。*\n  - *内存密集部分：多种降低 DRAM 带宽需求的技术。*\n- FPGA 加速的稠密线性机器学习：精度与收敛性的权衡。\n- 使用 DCT 插值实现 Tanh 函数的可配置 FPGA 实现。\n\n### 2017 DAC\n- **Deep^3：利用三级并行性实现高效深度学习。**（加州大学圣地亚哥分校，莱斯大学）\n- **实时计算遇上近似计算：一种弹性深度学习加速器设计，在服务质量（QoS）与结果质量（QoR）之间自适应权衡。**（中科院）\n  - *我不确定所提出的调优场景和方向是否足够合理以找到可行解。*\n- **探索用于 FPGA 加速深度卷积神经网络的异构算法。**（北京大学、香港中文大学、商汤科技）\n- **高精度无乘法器深度神经网络的软硬件协同设计。**（布朗大学）\n- **面向二值权重卷积神经网络的核分解架构。**（韩国科学技术院）\n- **基于频域计算的节能型卷积神经网络训练加速器设计。**（佐治亚理工学院）\n- **新型随机计算乘法器及其在深度神经网络中的应用。**（蔚山国立科学技术院）\n- **TIME：面向忆阻器（Memristor）深度神经网络的内存中训练架构。**（清华大学，加州大学圣塔芭芭拉分校）\n- **面向 RRAM 神经计算系统的容错训练与在线故障检测。**（清华大学，杜克大学）\n- **自动化生成与优化脉动阵列以实现高吞吐卷积神经网络。**（北京大学、UCLA、Falcon）\n- **迈向全系统级能效-精度权衡：以近似智能相机系统为例。**（普渡大学）\n  - *协同调节组件级近似参数，以实现系统级能效与精度的权衡。*\n- 面向近阈值计算（Near Threshold Computing, NTC）的误差传播感知时序松弛。（卡尔斯鲁厄理工学院）\n- RESPARC：采用忆阻交叉阵列的可重构节能架构，用于深度脉冲神经网络。（普渡大学）\n- 高缺陷率下拯救忆阻器神经形态设计。（匹兹堡大学，惠普实验室，杜克大学）\n- Group Scissor：将神经形态计算设计扩展至大型神经网络。（匹兹堡大学，杜克大学）\n- 面向老化诱导近似的探索。（卡尔斯鲁厄理工学院，德克萨斯大学奥斯汀分校）\n- SABER：面向容错电路设计的近似位选择方法。（明尼苏达大学，德州农工大学）\n- 利用迭代训练控制近似计算的质量权衡。（上海交通大学，香港中文大学）\n\n### 2017 DATE\n- **DVAFS：通过动态电压-精度-频率缩放（Dynamic-Voltage-Accuracy-Frequency-Scaling）换取计算精度以节省能耗。**（鲁汶大学）\n- **加速器友好的神经网络训练：学习 RRAM 交叉阵列中的工艺偏差与缺陷。**（上海交通大学，匹兹堡大学，Lynmax Research）\n- **一种新颖的零权重\u002F激活感知卷积神经网络硬件架构。**（首尔国立大学）\n  - *解决由零值引起的负载不均衡问题。*\n- **理解精度量化对神经网络精度与能耗的影响。**（布朗大学）\n- **卷积神经网络 FPGA 加速器的设计空间探索。**（三星，蔚山国立科学技术院，首尔国立大学）\n- 
**MoDNN：面向深度神经网络的本地分布式移动计算系统。**（匹兹堡大学，乔治梅森大学，马里兰大学）\n- **Chain-NN：用于加速深度卷积神经网络的节能一维链式架构。**（早稻田大学）\n- **LookNN：无需乘法的神经网络。**（加州大学圣地亚哥分校）\n  - *聚类权重并使用查找表（LUT）避免乘法运算。*\n- 基于位重要性驱动逻辑压缩的节能近似乘法器设计。（纽卡斯尔大学）\n- 重塑时序错误恢复能力以应对近阈值计算系统的瓶颈。（犹他州立大学）\n\n### 2017 VLSI\n- **一款 3.43TOPS\u002FW、48.9pJ\u002F像素、50.1nJ\u002F分类、512 个模拟神经元的稀疏编码神经网络芯片，在 40nm CMOS 工艺下支持片上学习与分类。**（密歇根大学，英特尔）\n- **BRein Memory：一款 13 层、4.2K 神经元\u002F0.8M 突触、支持二值\u002F三值重配置的内存中深度神经网络加速器，采用 65nm CMOS 工艺。**（北海道大学，东京工业大学，庆应义塾大学）\n- **一款 1.06 至 5.09 TOPS\u002FW 可重构混合神经网络处理器，适用于深度学习应用。**（清华大学）\n- **一款 127mW、1.63TOPS 的稀疏时空认知 SoC，用于视频中的动作分类与运动追踪。**（密歇根大学）\n\n### 2017 ICCAD\n- **AEP：一种承载误差的神经网络加速器，兼顾能效与模型保护。**（匹兹堡大学）\n- VoCaM：面向移动端系统的可视化导向卷积神经网络加速。（乔治梅森大学，杜克大学）\n- AdaLearner：面向神经网络的自适应分布式移动学习系统。（杜克大学）\n- MeDNN：面向大规模 DNN 的增强型分布式移动系统，支持优化划分与部署。（杜克大学）\n- TraNNsformer：面向忆阻交叉阵列神经形态系统设计的神经网络变换工具。（普渡大学）\n- 一种闭环设计以增强忆阻器神经网络芯片的权重稳定性。（杜克大学）\n- 对深度神经网络的故障注入攻击。（香港中文大学）\n- ORCHARD：基于近似内存处理的视觉对象识别加速器。（加州大学圣地亚哥分校）\n\n### 2017 HotChips\n- **用于训练深度神经网络（Deep Neural Networks, DNN）的数据流处理芯片。** (Wave Computing)\n- **Brainwave：数据中心规模下持续神经网络的加速方案。** (Microsoft)\n- **DNN ENGINE：面向嵌入式大众市场的 16nm 亚微焦耳级深度神经网络推理加速器。** (哈佛大学, ARM)\n- **DNPU：具备片上立体匹配功能的高能效深度神经网络处理器。** (KAIST)\n- **张量处理单元（Tensor Processing Unit, TPU）评估：面向数据中心的深度神经网络加速器。** (Google)\n- NVIDIA Volta GPU：面向 GPU 计算的可编程性与性能优化。 (NVIDIA)\n- Knights Mill：面向机器学习的英特尔至强融核（Xeon Phi）处理器。 (Intel)\n- XPU：面向多样化工作负载的可编程 FPGA 加速器。 (百度)\n\n### 2017 MICRO\n- **Bit-Pragmatic 深度神经网络计算。** (NVIDIA, 多伦多大学)\n- **CirCNN：利用块循环权重矩阵加速并压缩深度神经网络。** (雪城大学, 纽约市立大学, 南加州大学, 加州州立大学, 东北大学)\n- **DRISA：基于 DRAM 的可重构原位加速器。** (加州大学圣塔芭芭拉分校, 三星)\n- **面向机器学习的横向扩展加速。** (佐治亚理工学院, 加州大学圣地亚哥分校)\n  - 提出 CoSMIC，一个完整的计算栈，包含语言、编译器、系统软件、模板架构和电路生成器，支持大规模可编程学习加速。\n- DeftNN：通过突触向量消除与近计算数据分裂解决 GPU 上 DNN 执行瓶颈。 (密歇根大学, 内华达大学)\n- 数据移动感知的计算划分。 (宾州州立大学, TOBB 经济技术大学)\n  - *在众核系统上划分计算任务以实现近数据处理。*\n\n### 2018 ASPDAC\n- **ReGAN：基于流水线 ReRAM 的生成对抗网络（Generative Adversarial Networks, GAN）加速器。** (匹兹堡大学, 杜克大学)\n- **面向增强可扩展性、能效与可编程性的以加速器为中心的深度学习系统。** (浦项科技大学)\n- **支持用户自定义 CNN 的架构与算法。** (首尔国立大学, 三星)\n- **面向图像超分辨率的 FPGA 卷积神经网络加速器优化。** (西江大学)\n- **运行稀疏与低精度神经网络：当算法遇上硬件。** (杜克大学)\n\n### 2018 ISSCC\n- **55nm 时域混合信号类脑加速器，配备随机突触与嵌入式强化学习功能，适用于自主微型机器人。** (佐治亚理工学院)\n- **迈向边缘机器学习处理。** (Google)\n- **QUEST：采用感应耦合技术堆叠于 96MB 3D SRAM 上的 7.49TOPS 多用途对数量化 DNN 推理引擎，基于 40nm CMOS 工艺。** (北海道大学, Ultra Memory, 庆应义塾大学)\n- **UNPU：支持 1b 至 16b 全可变权重精度的 50.6TOPS\u002FW 统一深度神经网络加速器。** (KAIST)\n- **9.02mW 基于 CNN 立体视觉的实时 3D 手势识别处理器，适用于智能移动设备。** (KAIST)\n- **始终在线的 3.8μJ\u002F86% CIFAR-10 混合信号二值 CNN 处理器，全内存集成于 28nm CMOS 芯片内。** (斯坦福大学, 鲁汶大学)\n- **Conv-RAM：面向低功耗 CNN 机器学习应用的高能效 SRAM，内置卷积计算功能。** (麻省理工学院)\n- **42pJ\u002F决策、3.12TOPS\u002FW 的片上训练型内存内机器学习分类器。** (伊利诺伊大学厄巴纳-香槟分校)\n- **受大脑启发的计算：利用碳纳米管场效应晶体管（FETs）与阻变存储器（RRAM），以超维计算为案例研究。** (斯坦福大学, 加州大学伯克利分校, 麻省理工学院)\n- **65nm 1Mb 非易失性存内计算 ReRAM 宏单元，支持亚 16ns 乘加运算，适用于二值 DNN AI 边缘处理器。** (国立清华大学)\n- **65nm 4Kb 算法相关存内计算 SRAM 单元宏，支持 2.3ns 和 55.8TOPS\u002FW 全并行乘积累加运算，适用于二值 DNN 边缘处理器。** (国立清华大学, 台积电, 电子科技大学, 亚利桑那州立大学)\n- **1μW 语音活动检测器，采用模拟特征提取与数字深度神经网络。** (哥伦比亚大学)\n\n### 2018 HPCA\n- **使忆阻器神经网络加速器更可靠。** (罗切斯特大学)\n- **面向无监督 GAN 深度学习加速的高效微架构设计探索。** (佛罗里达大学)\n- **压缩 DMA 引擎：利用激活稀疏性加速深度神经网络训练。** (浦项科技大学, NVIDIA, 德州大学奥斯汀分校)\n- **原位 AI：面向物联网系统的自主与增量式深度学习。** (佛罗里达大学, 重庆大学, 首都师范大学)\n- RC-NVM：为内存数据库启用对称行列内存访问。 (北京大学, 国防科技大学, 杜克大学, 加州大学洛杉矶分校, 宾州州立大学)\n- GraphR：利用 ReRAM 加速图处理。 (杜克大学, 南加州大学, 纽约州立大学宾汉姆顿分校)\n- GraphP：通过高效数据分区减少基于 PIM 的图处理通信开销。 (清华大学, 南加州大学, 斯坦福大学)\n- PM3：面向存内计算的功耗建模与功耗管理。 (北京大学)\n\n### 2018 ASPLOS\n- **弥合神经网络与类脑硬件之间的鸿沟：使用神经网络编译器。** (清华大学, 加州大学圣塔芭芭拉分校)\n- **MAERI：通过可重构互连实现 DNN 
加速器上的灵活数据流映射。** (佐治亚理工学院)\n  - *更高 PE 利用率：使用增强型归约树（可重构互连）构建任意尺寸的虚拟神经元。*\n- **VIBNN：贝叶斯神经网络的硬件加速。** (雪城大学, 南加州大学)\n- 利用动态热能收集技术为智能手机中的移动应用重用能量。 (贵州大学, 佛罗里达大学)\n- Potluck：面向计算密集型移动应用的跨应用近似去重。 (耶鲁大学)\n\n### 2018 VLSI\n- **STICKER：0.41–62.1 TOPS\u002FW 8bit 神经网络处理器，兼容多稀疏度卷积阵列，并支持全连接层在线调优加速。** (清华大学)\n- **2.9TOPS\u002FW 可重构稠密\u002F稀疏矩阵乘法加速器，统一支持 INT8\u002FINT16\u002FFP16 数据通路，基于 14nm Tri-gate CMOS 工艺。** (英特尔)\n- **可扩展的多太拉 OPS 深度学习处理器核心，支持 AI 训练与推理。** (IBM)\n- **超高效能可重构处理器，支持二值\u002F三值权重深度神经网络，基于 28nm CMOS 工艺。** (清华大学)\n- **B‐Face：0.2 mW 基于 CNN 的人脸识别处理器，集成人脸对齐功能，适用于移动用户身份识别。** (KAIST)\n- **141 uW、2.46 pJ\u002F神经元的二值卷积神经网络自学习语音识别处理器，基于 28nm CMOS 工艺。** (清华大学)\n- **混合信号二值卷积神经网络加速器，集成稠密权重存储与乘法运算以减少数据移动。** (普林斯顿大学)\n- **PhaseMAC：14 TOPS\u002FW 8bit 基于 GRO 的相域 MAC 电路，适用于传感器内计算的深度学习加速器。** (东芝)\n\n### 2018 FPGA\n- **C-LSTM：在 FPGA 上利用结构化压缩技术实现高效的 LSTM。**（北京大学、雪城大学、纽约市立大学）\n- **DeltaRNN：一种高能效的循环神经网络加速器。**（苏黎世联邦理工学院、BenevolentAI）\n- **面向 2D 和 3D CNN 加速的统一模板架构设计。**（国防科技大学）\n- **为 Intel HARPv2 Xeon+FPGA 平台定制的矩阵乘法框架——深度学习案例研究。**（悉尼大学、英特尔）\n- **用于在 FPGA 上生成高吞吐量 CNN 实现的框架。**（南加州大学）\n- Liquid Silicon：基于 RRAM 技术的数据中心可重构架构。（威斯康星大学麦迪逊分校）\n\n### 2018 ISCA\n- **RANA：利用刷新优化的嵌入式 DRAM 实现高效神经加速。**（清华大学）\n- **Brainwave：支持实时 AI 的可配置云规模 DNN 处理器。**（微软）\n- **PROMISE：面向机器学习算法的可编程混合信号加速器端到端设计。**（伊利诺伊大学厄巴纳-香槟分校）\n- **通过利用输入相似性实现 DNN 中的计算复用。**（加泰罗尼亚理工大学）\n- **GANAX：面向生成对抗网络（GAN）的统一 SIMD-MIMD 加速架构。**（佐治亚理工学院、IPM、高通、加州大学圣地亚哥分校、伊利诺伊大学厄巴纳-香槟分校）\n- **SnaPEA：预测性早期激活以减少深度卷积神经网络中的计算量。**（加州大学圣地亚哥分校、佐治亚理工学院、高通）\n- **UCNN：通过权重重复利用实现深度神经网络中的计算复用。**（伊利诺伊大学厄巴纳-香槟分校、英伟达）\n- **基于异常值感知低精度计算的高能效神经网络加速器。**（首尔国立大学）\n- **基于预测执行的深度神经网络加速方法。**（佛罗里达大学）\n- **Bit Fusion：面向深度神经网络加速的位级动态可组合架构。**（佐治亚理工学院、ARM、加州大学圣地亚哥分校）\n- **Gist：面向深度神经网络训练的高效数据编码方法。**（密歇根大学、微软、多伦多大学）\n- **DNN 剪枝的阴暗面。**（加泰罗尼亚理工大学）\n- **Neural Cache：在缓存内进行位串行加速的深度神经网络架构。**（密歇根大学）\n- EVA^2：利用时序冗余提升实时计算机视觉性能。（康奈尔大学）\n- Euphrates：面向低功耗移动连续视觉的算法-SoC 协同设计。（罗切斯特大学、佐治亚理工学院、ARM）\n- 面向高效脉冲神经网络仿真的特征驱动与空间折叠数字神经元。（浦项科技大学\u002F伯克利、首尔国立大学）\n- 空间-时间代数：新皮层计算模型。（威斯康星大学）\n- 利用计算复用架构扩展数据中心加速器。（普林斯顿大学）\n   - *在加速器中增加基于 NVM（非易失性存储器）的存储层，实现计算复用。*\n- 在忆阻器加速器上实现科学计算。（罗切斯特大学）\n\n### 2018 DATE\n- **MATIC：通过容忍错误实现高效低压神经网络加速器。**（华盛顿大学）\n   - *学习容忍由 SRAM 电压缩放引起的错误，在 65nm 流片测试芯片上验证。*\n- **通过平衡计算负载最大化 LSTM 加速器系统性能。**（浦项科技大学）\n   - *提出稀疏矩阵格式以均衡计算负载，并在 LSTM 上验证。*\n- **CCR：面向稀疏神经网络加速器的简洁卷积规则。**（中国科学院）\n   - *将卷积分解为多个稠密和零核，以节省稀疏性开销。*\n- **Block Convolution：面向 FPGA 上大规模 CNN 推理的内存高效方案。**（中国科学院）\n- **moDNN：面向 GPU 的内存最优 DNN 训练方法。**（圣母大学、中国科学院）\n- HyperPower：面向神经网络的功耗与内存受限超参数优化。（卡内基梅隆大学、谷歌）\n\n### 2018 DAC\n- **Compensated-DNN：通过补偿量化误差实现低功耗低精度深度神经网络。**（**最佳论文**，普渡大学、IBM）\n  - *提出一种新的定点表示法：带误差补偿的定点数（FPEC）：计算位 + 表示量化误差的补偿位。*\n  - *提出一种低开销稀疏补偿方案，用于估计 MAC 设计中的误差。*\n- **利用原位低精度迁移学习校准模拟神经网络处理器的工艺偏差。**（清华大学）\n- **DPS：面向基于随机计算的深度神经网络的动态精度缩放。**（韩国蔚山国立科学技术院）\n- **DyHard-DNN：通过动态硬件重配置进一步加速 DNN。**（弗吉尼亚大学）\n- **探索深度学习处理器的可编程性：从架构到张量化。**（华盛顿大学）\n- **LCP：面向 FPGA 上 Inception 和 ResNet 网络加速的层簇并行映射机制。**（清华大学）\n- **面向二值权重卷积神经网络的核分解架构。**（清华大学）\n- **Ares：用于量化深度神经网络鲁棒性的框架。**（哈佛大学）\n- **ThUnderVolt：通过激进电压降压和时序错误容忍实现高能效深度学习加速器。**（纽约大学、印度理工学院坎普尔分校）\n- **Loom：利用权重与激活精度加速卷积神经网络。**（多伦多大学）\n- **为二值神经网络并行化 SRAM 阵列并定制位单元。**（亚利桑那大学）\n- **基于 ReRAM 的类脑计算系统的热感知优化。**（西北大学）\n- **SNrram：基于阻变随机存取存储器（ReRAM）的高效稀疏神经网络计算架构。**（清华大学、加州大学圣塔芭芭拉分校）\n- **Long Live TIME：通过结构化梯度稀疏化提升存内训练引擎寿命。**（清华大学、中科院、麻省理工学院）\n- **带宽高效的深度学习。**（麻省理工学院、斯坦福大学）\n- **面向嵌入式视觉应用的深度神经网络与神经网络加速器协同设计。**（伯克利大学）\n- **Sign-Magnitude SC：在随机计算中免费获得 10 倍精度提升的深度神经网络方法。**（韩国蔚山国立科学技术院）\n- **DrAcc：基于 DRAM 的高精度 CNN 推理加速器。**（国防科技大学、印第安纳大学、匹兹堡大学）\n- **使用多级嵌入式非易失性存储器（eNVM）的片上深度神经网络存储。**（哈佛大学）\n- VRL-DRAM：通过可变刷新延迟提升 DRAM 
性能。（德雷塞尔大学、苏黎世联邦理工学院）\n\n### 2018 HotChips\n- **ARM 第一代机器学习处理器。**（ARM）\n- **NVIDIA 深度学习加速器。**（英伟达）\n- **Xilinx 张量处理器：面向 Xilinx FPGA 的推理引擎、网络编译器与运行时系统。**（赛灵思）\n- Tachyum 云芯片：面向超大规模工作负载、深度机器学习、通用、符号及生物 AI。（Tachyum）\n- SMIV：采用 16nm 工艺、具备高效灵活 DNN 加速能力的 SoC，适用于智能物联网设备。（ARM）\n- NVIDIA Xavier 系统级芯片。（英伟达）\n- Xilinx Project Everest：软硬件可编程引擎。（赛灵思）\n\n### 2018 ICCAD\n- **Tetris：为机器学习加速器重新架构卷积神经网络（CNN）计算。** (CAS)\n- **3DICT：面向 3D XPoint ReRAM 的可靠且支持 QoS 的移动端存内计算（Process-In-Memory）架构，适用于基于查找的 CNN。** (印第安纳大学布卢明顿分校, 佛罗里达国际大学)\n- **TGPA：面向低延迟 CNN 推理的 Tile 粒度流水线架构。** (北京大学, 加州大学洛杉矶分校, Falcon)\n- **NID：在通用 DRAM 中处理二值卷积神经网络。** (韩国科学技术院 KAIST)\n- **基于深度 Q 学习的自适应精度 SGD 框架。** (北京大学)\n- **使用任意对数底数的对数数据表示法高效硬件加速 CNN。** (罗伯特·博世有限公司 Robert Bosch GmbH)\n- **C-GOOD：面向设备端深度学习优化的 C 代码生成框架。** (首尔国立大学 SNU)\n- **基于混合尺寸交叉开关阵列的 RRAM CNN 加速器，采用重叠映射方法。** (清华大学 THU)\n- **FCN-Engine：在经典 CNN 处理器中加速反卷积层。** (哈尔滨工业大学 HUT, CAS, 新加坡国立大学 NUS)\n- **DNNBuilder：用于 FPGA 上构建高性能 DNN 硬件加速器的自动化工具。** (伊利诺伊大学厄巴纳-香槟分校 UIUC)\n- **DIMA：深度可分离卷积神经网络的存内加速器。** (中佛罗里达大学 Univ. of Central Florida)\n- **EMAT：面向迁移学习的高效多任务 ReRAM 架构。** (杜克大学 Duke)\n- **FATE：面向低功耗 DNN 加速器设计的快速准确时序错误预测框架。** (纽约大学 NYU)\n- **面向能量受限图像分类任务的自适应神经网络设计。** (卡内基梅隆大学 CMU)\n- 面向嵌入式系统的深度神经网络水印技术。 (加州大学洛杉矶分校 UCLA)\n- 防御性 Dropout：在对抗攻击下强化深度神经网络。 (东北大学 Northeastern Univ., 波士顿大学 Boston Univ., 佛罗里达国际大学)\n- 面向 2.5D 系统中网络设计与优化的跨层方法论。 (波士顿大学, 加州大学圣地亚哥分校 UCSD)\n\n### 2018 MICRO\n- **应对稀疏神经网络中的不规则性：一种软硬件协同方法。** (中国科学技术大学 USTC, CAS)\n- **Diffy：一种无重复计算的差分深度神经网络加速器。** (多伦多大学 University of Toronto)\n- **超越内存墙：面向深度学习的以内存为中心的高性能计算系统案例研究。** (韩国科学技术院 KAIST)\n- **面向移动 GPU 的内存友好型长短期记忆网络（LSTM）。** (休斯顿大学, 首都师范大学)\n- **一种网络中心化的硬件\u002F算法协同设计，用于加速分布式深度神经网络训练。** (UIUC, THU, SJTU, 英特尔 Intel, UCSD)\n- **PermDNN：基于置换对角矩阵的高效压缩深度神经网络架构。** (纽约市立大学 City University of New York, 明尼苏达大学, 南加州大学 USC)\n- **GeneSys：通过硬件中神经网络演化实现持续学习。** (佐治亚理工学院 Georgia Tech)\n- **面向能效神经网络训练的存内计算：一种异构方法。** (马德里康普顿斯大学 UCM, UCSD, 加州大学圣克鲁兹分校 UCSC)\n  - 调度由 CPU 和异构存内计算单元（固定功能逻辑 + 可编程 ARM 核心）提供的计算资源，以优化能效和硬件利用率。\n- **LerGAN：一种零值消除、低数据移动、基于存内计算的 GAN 架构。** (清华大学 THU, 佛罗里达大学)\n- **通过分布式近数据处理实现 Winograd 层的多维并行训练。** (KAIST)\n  - 将 Winograd 应用于训练，扩展传统数据并行性，新增“Tile 内并行”维度。在此模式下，节点被划分为若干组，权重更新通信仅在组内独立进行。该方法在训练集群中展现出更优的可扩展性，因为总通信量不随节点数量增加而增长。\n- **SCOPE：面向 DRAM 原位加速器的随机计算引擎。** (加州大学圣塔芭芭拉分校 UCSB, 三星 Samsung)\n- **Morph：面向 3D CNN 视频理解的灵活加速架构。** (UIUC)\n- 多线程可重构粗粒度阵列中的线程间通信。 (以色列理工学院 Technion)\n- 面向可重构硬件上动态并行算法加速的架构框架。 (康奈尔大学 Cornell)\n\n### 2019 ASPDAC\n- **一种 N 路组关联架构及稀疏数据组关联负载均衡算法，用于稀疏 CNN 加速器。** (清华大学 THU)\n- **TNPU：一种高效的卷积神经网络训练加速器架构。** (中科院计算所 ICT)\n- **NeuralHMC：一种基于 HMC 的高效深度神经网络加速器。** (匹兹堡大学, 杜克大学)\n- **P3M：一种基于存内计算（PIM）的神经网络模型保护方案，用于深度学习加速器。** (ICT)\n- GraphSAR：面向大规模图处理的稀疏感知存内计算架构，基于 ReRAM。 (清华大学, 麻省理工学院 MIT, 伯克利 Berkeley)\n\n### 2019 ISSCC\n- **一款 8nm 旗舰移动 SoC 中的 11.5TOPS\u002FW、1024-MAC 蝴蝶结构双核稀疏感知神经处理单元。** (三星 Samsung)\n- **一款符合 ISO26262 标准、面向汽车应用的 20.5TOPS 与 217.3GOPS\u002Fmm² 多核 SoC，集成 DNN 加速器与图像信号处理器。** (东芝 Toshiba)\n- **一款 879GOPS、243mW、80fps VGA 全视觉 CNN-SLAM 处理器，适用于广域自主探索。** (密歇根大学 Michigan)\n- **一款 2.1TFLOPS\u002FW 的移动深度强化学习加速器，配备可转置 PE 阵列与经验压缩机制。** (KAIST)\n- **一款 65nm 工艺、0.39 至 140.3TOPS\u002FW、1 至 12 位统一神经网络处理器，采用块循环启用的转置域加速技术，实现 8.1× 更高的 TOPS\u002Fmm²，并基于 6T HBST-TRAM 的二维数据复用架构。** (清华大学 THU, 国立清华大学, 东北大学)\n- **一款 65nm、236.5nJ\u002F分类的类脑处理器，片上学习仅带来 7.5% 能量开销，采用直接脉冲反馈机制。** (首尔国立大学 SNU)\n- **LNPU：一款 25.3TFLOPS\u002FW 的稀疏深度神经网络学习处理器，支持细粒度混合精度（FP8-FP16）。** (KAIST)\n- 一款 1Mb 多比特 ReRAM 存算一体宏单元，面向基于 CNN 的 AI 边缘处理器，具备 14.6ns 并行 MAC 计算时间。 (国立清华大学)\n- Sandwich-RAM：一种基于脉宽调制的高能效存内二值权重网络（BWN）架构。 (东南大学, 博兴电子, 清华大学)\n- 一款 Twin-8T SRAM 存算一体宏单元，支持多位 CNN 
机器学习。 (国立清华大学, 电子科技大学, 亚利桑那州立大学 ASU, 佐治亚理工学院)\n- 一款可重构 RRAM 物理不可克隆函数（PUF），利用后工艺随机源，原生误码率低于 6×10⁻⁶。 (清华大学, 国立清华大学, 佐治亚理工学院)\n- 一款 65nm、1.1 至 9.1TOPS\u002FW 的混合数字-模拟计算平台，用于加速基于模型与无模型的群体机器人系统。 (佐治亚理工学院)\n- 一款支持位串行整数\u002F浮点运算的计算型 SRAM，用于可编程存内向量加速。 (密歇根大学)\n- 一款全数字时域 CNN 引擎，采用双向内存延迟线，面向高能效边缘计算。 (德州大学奥斯汀分校 UT Austin)\n\n### 2019 HPCA\n- **HyPar: 面向深度学习加速器阵列的混合并行性（Hybrid Parallelism）设计。** (杜克大学, 南加州大学)\n- **E-RNN: FPGA 上循环神经网络（Recurrent Neural Networks, RNN）的高效设计优化。** (雪城大学, 东北大学, 佛罗里达国际大学, 南加州大学, 布法罗大学)\n- **Bit Prudent In-Cache 加速深度卷积神经网络（Deep Convolutional Neural Networks）。** (密歇根大学, 英特尔)\n- **Shortcut Mining: 在深度卷积神经网络（DCNN）加速器中挖掘跨层捷径复用。** (俄亥俄州立大学)\n- **NAND-Net: 最小化基于内存计算（In-Memory Processing）的二值神经网络（Binary Neural Networks）的计算复杂度。** (韩国科学技术院)\n- **Kelp: 面向机器学习平台加速器的服务质量（QoS）保障机制。** (微软, 谷歌, 德克萨斯大学奥斯汀分校)\n- **Facebook 的机器学习：理解边缘推理（Inference at the Edge）。** (Facebook)\n- The Accelerator Wall: 芯片专用化的极限。 (普林斯顿大学)\n\n### 2019 ASPLOS\n- **FA3C: FPGA 加速的深度强化学习（Deep Reinforcement Learning）。** (弘益大学, 首尔国立大学)\n- **PUMA: 面向机器学习推理的可编程超高效忆阻器（Memristor）加速器。** (普渡大学, 伊利诺伊大学厄巴纳-香槟分校, 惠普)\n- **FPSA: 面向可重构 ReRAM 神经网络加速器架构的全系统栈解决方案。** (清华大学, 加州大学圣塔芭芭拉分校)\n- **Bit-Tactical: 利用神经网络中的数值与位稀疏性（Value and Bit Sparsity）的软硬件协同方法。** (多伦多大学, 英伟达)\n- **TANGRAM: 面向可扩展神经网络加速器的优化粗粒度数据流（Coarse-Grained Dataflow）。** (斯坦福大学)\n- **Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization.** (哈佛大学)\n- **Split-CNN: 通过拆分卷积神经网络中的窗口操作以优化内存系统。** (IBM, 庆北国立大学)\n- **HOP: 异构感知的去中心化训练（Heterogeneity-Aware Decentralized Training）。** (南加州大学, 清华大学)\n- **Astra: 利用可预测性优化深度学习。** (微软)\n- **ADMM-NN: 使用乘子交替方向法（Alternating Direction Methods of Multipliers）的深度神经网络算法-硬件协同设计框架。** (东北大学, 雪城大学, 纽约州立大学布法罗分校, 南加州大学)\n- **DeepSigns: 保护深度神经网络所有权的端到端水印框架。** (加州大学圣地亚哥分校)\n\n### 2019 ISCA\n- **Sparse ReRAM Engine: 在压缩神经网络中联合探索激活值与权重稀疏性（Activation and Weight Sparsity）。** (国立台湾大学, 中央研究院, Macronix)\n- **MnnFast: 面向内存增强型神经网络（Memory-Augmented Neural Networks）的快速可扩展系统架构。** (浦项科技大学, 首尔国立大学)\n- **TIE: 基于张量列车（Tensor Train）的高能效深度神经网络推理引擎。** (罗格斯大学, 南京大学, 南加州大学)\n- **Accelerating Distributed Reinforcement Learning with In-Switch Computing.** (伊利诺伊大学厄巴纳-香槟分校)\n- **Eager Pruning: 支持深度神经网络快速训练的算法与架构。** (佛罗里达大学)\n- **Laconic Deep Learning Inference Acceleration.** (多伦多大学)\n- **DeepAttest: 深度神经网络的端到端认证框架。** (加州大学圣地亚哥分校)\n- **A Stochastic-Computing based Deep Learning Framework using Adiabatic Quantum-Flux-Parametron Superconducting Technology.** (东北大学, 横滨国立大学, 南加州大学, 阿尔伯塔大学)\n- **Fractal Machine Learning Computers.** (中科院计算所)\n- **FloatPIM: 高精度深度神经网络训练的内存内加速（In-Memory Acceleration）。** (加州大学圣地亚哥分校)\n- Energy-Efficient Video Processing for Virtual Reality. (伊利诺伊大学厄巴纳-香槟分校, 罗切斯特大学)\n- Scalable Interconnects for Reconfigurable Spatial Architectures. (斯坦福大学)\n- CoNDA: 通过优化数据移动实现高效的近数据加速器通信。 (卡内基梅隆大学, 苏黎世联邦理工学院)\n\n### 2019 DAC\n- **Accuracy vs. 
Efficiency: 通过 FPGA 实现感知的神经架构搜索（Neural Architecture Search）同时实现精度与效率。** (华东师范大学, 匹兹堡大学, 重庆大学, 加州大学欧文分校, 圣母大学)\n- **FPGA\u002FDNN Co-Design: 面向边缘物联网智能的高效设计方法论。** (伊利诺伊大学厄巴纳-香槟分校, IBM, Inspirit IoT)\n- **An Optimized Design Technique of Low-Bit Neural Network Training for Personalization on IoT Devices.** (韩国科学技术院)\n- **ReForm: 面向移动设备的静态与动态资源感知 DNN 重配置框架。** (乔治梅森大学, 克拉克森大学)\n- **DRIS-3: 基于故障分析的 3D 堆叠存储器中深度神经网络可靠性提升方案。** (成均馆大学)\n- **ZARA: 基于 3D ReRAM 的生成对抗网络（Generative Adversarial Networks）无零数据流加速器。** (杜克大学)\n- **BitBlade: 基于位级累加的面积与能效优化、精度可扩展神经网络加速器。** (浦项科技大学)\n- X-MANN: 面向内存增强型神经网络的交叉开关架构。 (普渡大学, 英特尔)\n- Thermal-Aware Design and Management for Search-based In-Memory Acceleration. (加州大学圣地亚哥分校)\n- An Energy-Efficient Network-on-Chip Design using Reinforcement Learning. (乔治华盛顿大学)\n- Designing Vertical Processors in Monolithic 3D. (伊利诺伊大学厄巴纳-香槟分校)\n\n### 2019 MICRO\n- **Wire-Aware Architecture and Dataflow for CNN Accelerators.** (犹他大学)\n- **ShapeShifter: 在深度学习中实现细粒度数据宽度自适应。** (多伦多大学)\n- **Simba: 基于多芯片模块（Multi-Chip-Module）架构的深度学习推理扩展方案。** (英伟达)\n- **ZCOMP: 利用向量扩展减少 DNN 跨层内存占用。** (谷歌, 英特尔)\n- **Boosting the Performance of CNN Accelerators with Dynamic Fine-Grained Channel Gating.** (康奈尔大学)\n- **SparTen: 面向卷积神经网络的稀疏张量加速器。** (普渡大学)\n- **EDEN: 利用容错神经网络（Error-Resilient Neural Networks）实现 DNN 推理的近似 DRAM。** (苏黎世联邦理工学院, 卡内基梅隆大学)\n- **eCNN: 面向边缘推理的基于块结构的高度并行 CNN 加速器。** (国立清华大学)\n- **TensorDIMM: 面向深度学习嵌入与张量运算的实用近内存处理架构。** (韩国科学技术院)\n- **Understanding Reuse, Performance, and Hardware Cost of DNN Dataflows: A Data-Centric Approach.** (佐治亚理工学院, 英伟达)\n- **MaxNVM: 通过稀疏编码与错误缓解最大化 DNN 存储密度与推理效率。** (哈佛大学, Facebook)\n- **Neuron-Level Fuzzy Memoization in RNNs.** (加泰罗尼亚理工大学)\n- **Manna: 内存增强型神经网络加速器。** (普渡大学, 英特尔)\n- eAP: 面向自动机处理的可扩展高效内存内加速器。 (弗吉尼亚大学)\n- ComputeDRAM: 使用商用 DRAM 实现内存内计算。 (普林斯顿大学)\n- ExTensor: 稀疏张量代数加速器。 (伊利诺伊大学厄巴纳-香槟分校, 英伟达)\n- Efficient SpMV Operation for Large and Highly Sparse Matrices Using Scalable Multi-Way Merge Parallelization. (卡内基梅隆大学)\n- Sparse Tensor Core: 面向现代 GPU 上向量级稀疏神经网络的算法-硬件协同设计。 (加州大学圣塔芭芭拉分校, 阿里巴巴)\n- DynaSprint: 带动态效用与热管理的微架构冲刺机制。 (滑铁卢大学, ARM, 杜克大学)\n- MEDAL: 面向 DNA 种子算法的可扩展 DIMM 基近数据处理加速器。 (加州大学圣塔芭芭拉分校, 中科院计算所)\n- Tigris: 面向点云三维感知的架构与算法。 (罗切斯特大学)\n- ASV: 加速立体视觉系统。 (罗切斯特大学)\n- Alleviating Irregularity in Graph Analytics Acceleration: a Hardware\u002FSoftware Co-Design Approach. 
(加州大学圣塔芭芭拉分校, 中科院计算所)\n\n### 2019 ICCAD\n- **Zac：面向嵌入式设备的量化深度神经网络自动优化与部署。**（北京大学）\n- **NAIS：神经架构与实现搜索及其在自动驾驶中的应用。**（伊利诺伊大学厄巴纳-香槟分校）\n- **MAGNet：面向神经网络的模块化加速器生成器。**（NVIDIA）\n- **ReDRAM：一种用于加速批量位级操作的可重构存内处理（Processing-in-DRAM）平台。**（亚利桑那州立大学）\n- **Accelergy：一种面向加速器设计的架构级能耗估算方法。**（麻省理工学院）\n\n### 2019 ASSCC\n- **一款每轮训练仅耗电 47.4µJ 的可训练深度卷积神经网络加速器，支持智能设备上的原位个性化。**（韩国科学技术院）\n- **一款能效达 2.25 TOPS\u002FW、完全集成且支持片上训练的深度 CNN 学习处理器。**（台湾国立清华大学）\n- **一种稀疏自适应 CNN 处理器，采用面积\u002F性能均衡的 N 路组相联 PE 阵列，并由碰撞感知调度器辅助。**（清华大学，东北大学）\n- 一款用于无刷新 DSP 应用的 24 Kb 单阱混合 3T 增益单元 eDRAM，在 28nm FD-SOI 工艺中采用体偏置技术。（洛桑联邦理工学院）\n\n### 2019 VLSI\n- **基于 6T SRAM 阵列的高面积效率、抗工艺偏差存内二值神经网络（BNN）计算架构。**（浦项科技大学）\n- **一款每神经元功耗仅 5.1pJ、推理延迟 127.3us 的 RNN 语音识别处理器，采用 65nm CMOS 工艺并集成 16 个存内计算（Computing-in-Memory）SRAM 宏单元。**（清华大学，台湾国立清华大学，清微智能）\n- **一款能效达 0.11 pJ\u002FOp、算力范围 0.32–128 TOPS、可扩展多芯片模块（Multi-Chip Module）架构的深度神经网络加速器，采用 16nm 工艺并配备地参考信号传输技术。**（NVIDIA）\n- **SNAP：一款面向非结构化稀疏深度神经网络推理的稀疏神经加速处理器，在 16nm CMOS 工艺下实现 1.67 – 21.55TOPS\u002FW 能效。**（密歇根大学，NVIDIA）\n- **一款全高清 60 fps 的 CNN 超分辨率处理器，采用基于选择性缓存的层融合技术，专为移动设备设计。**（韩国科学技术院）\n- **一款能效达 1.32 TOPS\u002FW 的深度神经网络学习处理器，采用基于直接反馈对齐（Direct Feedback Alignment）的异构核心架构。**（韩国科学技术院）\n- 在低功耗边缘设备中将存内计算（Computing-In-Memory）与感内处理（Processing-In-Sensor）集成到卷积神经网络加速器的设计考量。（台湾国立清华大学，国立中兴大学）\n- 基于计算存储器（Computational Memory）的深度神经网络推理与训练。（IBM，洛桑联邦理工学院，苏黎世联邦理工学院等）\n- 一款基于三值、位可扩展、能效达 8.80 TOPS\u002FW 的 CNN 加速器 A95，采用多核存内处理架构，突触密度高达 896K\u002Fmm²。（瑞萨电子）\n- 利用铁电隧道结适度随机电导切换实现存内强化学习。（东芝）\n\n### 2019 HotChips\n- **MLPerf：由学术界与工业界合作开发的机器学习基准测试套件。**（MLPerf）\n- **Zion：Facebook 下一代大内存统一训练平台。**（Facebook）\n- **一种从纳米级到高性能计算均可扩展的统一神经网络计算架构。**（华为）\n- **大规模深度学习训练——Spring Crest 深度学习加速器。**（英特尔）\n- **Spring Hill——英特尔数据中心推理芯片。**（英特尔）\n- **晶圆级深度学习。**（Cerebras）\n- **Habana Labs 的 AI 训练扩展方案。**（Habana）\n- **Ouroboros：面向嵌入式设备 TTS 应用的 WaveNet 推理引擎。**（阿里巴巴）\n- **一款能效达 0.11 pJ\u002FOp、算力范围 0.32–128 TOPS、可扩展多芯片模块架构的深度神经网络加速器，采用高生产率 VLSI 设计方法学。**（NVIDIA）\n- **Xilinx Versal\u002FAI 引擎。**（Xilinx）\n- 一款面向位可扩展存内计算的可编程嵌入式微处理器。（普林斯顿大学）\n\n### 2019 FPGA\n- **Synetgy：面向嵌入式 FPGA 上 ConvNet 加速器的算法-硬件协同设计框架。**（清华大学，伯克利，都灵理工大学，Xilinx）\n- **REQ-YOLO：一种面向 FPGA 的资源感知高效量化框架，适用于目标检测。**（北京大学，东北大学）\n- **FPGA 上神经网络的可重构卷积核。**（卡塞尔大学）\n- **基于银行均衡稀疏性的 FPGA 上高效高性能稀疏 LSTM 实现。**（哈尔滨工业大学，微软，清华大学，北京航空航天大学）\n- **Cloud-DNN：一个开源框架，用于将 DNN 模型映射至云端 FPGA。**（先进数字科学中心，UIUC）\n- F5-HD：基于 FPGA 的快速灵活超维计算（Hyperdimensional Computing）刷新框架。（加州大学圣地亚哥分校）\n- Xilinx 自适应计算加速平台：Versal 架构。（Xilinx）\n\n### 2020 ISSCC\n- **一款能效 3.4 至 13.3TOPS\u002FW、峰值算力 3.6TOPS 的双核深度学习加速器，适用于 7nm 5G 智能手机 SoC 中的多样化 AI 应用。**（联发科）\n- **一款 12nm 可编程卷积高效神经处理单元（Neural-Processing-Unit）芯片，峰值算力达 825TOPS。**（阿里巴巴）\n- **STATICA：一款 512 自旋、0.25M 权重的全数字退火处理器，采用近内存“全自旋同步更新”架构，支持组合优化问题中的完整自旋-自旋交互。**（东京工业大学，北海道大学，东京大学）\n- **GANPU：一款能效达 135TFLOPS\u002FW 的多 DNN 训练处理器，专为生成对抗网络（GANs）设计，利用推测性双稀疏性挖掘技术。**（韩国科学技术院）\n- **一款功耗仅 510nW、工作电压 0.41V 的低内存低计算量关键词识别芯片，采用串行 FFT 的 MFCC 与二值化深度可分离卷积神经网络，基于 28nm CMOS 工艺。**（东南大学，EPFL，哥伦比亚大学）\n- **一款 65nm、每帧功耗 24.7μJ、平均功耗 12.3mW 的激活相似性感知卷积神经网络视频处理器，采用混合精度、帧间数据复用及混合位宽差分帧数据编解码技术。**（清华大学）\n- **一款 65nm 存内计算（Computing-in-Memory）CNN 处理器，系统能效达 2.9 至 35.8TOPS\u002FW，采用动态稀疏性性能缩放架构及高效的宏内\u002F宏间数据复用机制。**（清华大学，台湾国立清华大学）\n- 一款 28nm、64Kb、支持推理与训练的双向转置多比特 6T SRAM 存内计算宏单元，专为 AI 边缘芯片设计。（台湾国立清华大学）\n- 一款 7nm FinFET CMOS 工艺下的存内计算 SRAM 宏单元，算力达 351TOPS\u002FW 和 372.4GOPS，适用于机器学习应用。（台积电）\n- 一款 22nm、2Mb ReRAM 存内计算宏单元，支持多比特 MAC 运算，能效范围 121–28TOPS\u002FW，适用于微型 AI 边缘设备。（台湾国立清华大学）\n- 一款 28nm、64Kb、支持 8 位 MAC 运算的 6T SRAM 存内计算宏单元，专为 AI 边缘芯片设计。（台湾国立清华大学）\n- 一款每任务功耗仅 1.5μJ 的路径规划处理器，支持二维\u002F三维微机器人自主导航。（台湾国立清华大学）\n- 一款 65nm、能效 8.79TOPS\u002FW、功耗 
23.82mW 的混合信号振荡器型 NeuroSLAM 加速器，适用于边缘机器人应用。（佐治亚理工学院）\n- CIM-Spin：一款电压范围 0.5 至 1.2V 的可扩展退火处理器，采用数字存内计算自旋算子与寄存器型自旋，用于求解组合优化问题。（台湾国立清华大学）\n- 一种面向二维 PE 阵列加速器的计算自适应弹性时钟链技术，具备动态时序增强能力。（西北大学）\n- 一款能效达 74 TMACS\u002FW 的 CMOS-RRAM 神经突触核心，支持动态可重构数据流与原位可转置权重，适用于概率图模型。（斯坦福大学，加州大学圣地亚哥分校，清华大学，圣母大学）\n- 一款完全集成的模拟 ReRAM 存内计算芯片，能效达 78.4TOPS\u002FW，支持全并行 MAC 运算。（清华大学，台湾国立清华大学）\n\n### 2020 HPCA\n- **基于神经元到内存转换的深度学习加速。**（UCSD）\n- **HyGCN：采用混合架构的图卷积网络（GCN）加速器。**（ICT, UCSB）\n- **SIGMA：面向深度神经网络（DNN）训练的稀疏不规则 GEMM 加速器，配备灵活互连结构。**（Georgia Tech）\n- **PREMA：面向可抢占式神经处理单元（NPUs）的预测性多任务调度算法。**（KAIST）\n- **ALRESCHA：轻量级可重构稀疏计算加速器。**（Georgia Tech）\n- **SpArch：面向稀疏矩阵乘法的高效架构。**（MIT, NVIDIA）\n- **A3：通过近似方法加速神经网络中的注意力机制。**（SNU）\n- **AccPar：面向异构深度学习加速器阵列的张量分区方法。**（Duke, USC）\n- **PIXEL：光子神经网络加速器。**（Ohio, George Washington）\n- **Facebook 基于 DNN 的个性化推荐系统的架构影响分析。**（Facebook）\n- **通过基于存内计算（PIM）的架构设计实现高效胶囊网络处理。**（Houston）\n- **只见树木不见森林：边缘数据中端到端 AI 应用性能研究。**（UT Austin, Intel）\n- **卷积加速器中的通信下界分析。**（ICT, THU）\n- **Fulcrum：面向灵活实用原位加速器的简化控制与访问机制。**（Virginia, UCSB, Micron）\n- **EFLOPS：面向高性能分布式训练平台的算法与系统协同设计。**（阿里巴巴）\n- **ML 驱动设计的实践经验：片上网络（NoC）案例研究。**（AMD）\n- **Tensaurus：支持混合稀疏-稠密张量计算的通用加速器。**（Cornell, Intel）\n- **面向归纳矩阵算法的脉动-数据流混合架构。**（UCLA）\n- 面向架构探索的深度强化学习框架：无路由器 NoC 案例研究。（USC, OSU）\n- QuickNN：面向 3D 点云的 k-d 树最近邻搜索的内存与性能优化。（Umich, General Motors）\n- 轨道边缘计算：太空中的机器推理。（CMU）\n- 面向自动机处理的可扩展高效存内互连架构。（Virginia）\n- 降低移动设备连接待机能耗的技术。（ETHZ, Cyprus, CMU）\n\n### 2020 ASPLOS\n- **Shredder：通过学习噪声分布保护推理隐私。**（UCSD）\n- **DNNGuard：弹性异构 DNN 加速器架构，抵御对抗攻击。**（CAS, USC）\n- **Interstellar：利用 Halide 调度语言分析 DNN 加速器。**（Stanford, THU）\n- **DeepSniffer：基于学习架构提示的 DNN 模型提取框架。**（UCSB）\n- **Prague：高性能、感知异构的异步去中心化训练系统。**（USC）\n- **PatDNN：通过模式化权重剪枝实现在移动设备上的实时 DNN 执行。**（College of William and Mary, Northeastern, USC）\n- **Capuchin：面向深度学习的基于张量的 GPU 内存管理。**（HUST, MSRA, USC）\n- **NeuMMU：为神经处理单元（NPUs）提供高效地址转换的架构支持。**（KAIST）\n- **FlexTensor：面向异构系统的张量计算自动调度探索与优化框架。**（PKU）\n\n### 2020 DAC\n- **面向设备端增量学习系统的实用方法：选择性权重更新。**\n- **基于双向 SRAM 阵列的深度神经网络片上训练加速器。**\n- **最小化外围电路开销的存内神经网络计算的算法-硬件协同设计。**\n- **面向鲁棒深度学习推理的自适应浮点编码算法-硬件协同设计。**\n- **图神经网络的硬件加速。**\n- **利用数据流稀疏性实现高效的卷积神经网络训练。**\n- **使用计算存储设备进行低功耗深度神经网络训练加速。**\n- **基于预测置信度的低复杂度梯度计算以加速 DNN 训练。**\n- **SparseTrain：利用数据流稀疏性实现高效的卷积神经网络训练。**\n- **SCA：支持训练与推理的安全 CNN 加速器。**\n- **STC：面向外部内存访问减少的重要度感知变换编码框架。**\n\n### 2020 FPGA\n- **AutoDNNchip：通过编译、优化与探索自动生成 DNN 芯片。**（Rice, UIUC）\n- **在 CPU-FPGA 异构平台上加速 GCN 训练。**（USC）\n- 使用 FPGA 大规模模拟绝热分岔以求解组合优化问题。（Central Florida）\n\n### 2020 ISCA\n- **IBM POWER9 和 z15 处理器上的数据压缩加速器。**（IBM）\n- **集成于 x86 SoC 的高性能深度学习协处理器，搭配服务器级 CPU。**（Centaur）\n- **Think Fast：用于加速深度学习工作负载的张量流处理器（TSP）。**（Groq）\n- **MLPerf 推理：机器学习推理系统的基准测试方法论。**\n- **多神经网络加速架构。**（SNU）\n- **SmartExchange：以低成本计算换取高成本内存存储\u002F访问。**（Rice, TAMU, UCSB）\n- **Centaur：面向个性化推荐的基于芯粒（Chiplet）的混合稀疏-稠密加速器。**（KAIST）\n- **DeepRecSys：优化端到端大规模神经推荐推理的系统。**（Facebook, Harvard）\n- **面向共享内存多处理器集合操作的网内加速架构。**（NVIDIA）\n- **DRQ：面向深度神经网络加速的动态区域量化方法。**（SJTU）\n- IBM z15 高频大型机分支预测器。（ETHZ）\n- Déjà View：面向节能 360° VR 视频流的空间-时间计算复用。（Penn State）\n- uGEMM：面向 GEMM 应用的一元计算架构。（Wisconsin）\n- Gorgon：从关系数据加速机器学习。（Stanford）\n- RecNMP：通过近内存处理（Near-Memory Processing）加速个性化推荐。（Facebook）\n- JPEG-ACT：通过基于变换的有损压缩加速深度学习。（UBC）\n- 交换数据重排序：一种降低稀疏推理工作负载数据移动能耗的新技术。（Sandia, Rochester）\n- Echo：基于编译器的 LSTM RNN 训练 GPU 内存占用缩减。（Toronto, Intel）\n\n### 2020 HotChips\n- **Google 训练芯片揭秘：TPUv2 与 TPUv3。**（Google）\n- **首款晶圆级处理器（及未来）的软件协同设计（Software Co-design）。** (Cerebras)\n- **Manticore：面向超高能效浮点计算的 4096 核 RISC-V 小芯片架构。** (ETHZ)\n- **百度昆仑 – 面向多样化工作负载的人工智能处理器。** (Baidu)\n- **含光 800 NPU – 数据中心终极 AI 推理解决方案。** 
(Alibaba)\n- **用于人工智能加速的硅光子学（Silicon Photonics）。** (Lightmatter)\n- 玄铁-910：基于 RISC-V 创新云与边缘计算。（Alibaba）\n- ARM Cortex-M55 与 Ethos-U55 技术概览：ARM 最强大的终端 AI 处理器。（ARM）\n- PGMA：面向无监督学习的可扩展贝叶斯推断加速器。（Harvard）\n\n### 2020 VLSI\n- **PNPU：采用随机粗细粒度剪枝与自适应输入\u002F输出\u002F权重跳过的 146.52TOPS\u002FW 深度神经网络学习处理器。** (KAIST)\n- **3.0 TFLOPS、0.62V 可扩展处理器核心，实现高计算利用率的 AI 训练与推理。** (IBM)\n- **基于 10nm FinFET CMOS 工艺的 617 TOPS\u002FW 全数字二值神经网络加速器。** (Intel)\n- **支持多码长、超低延迟（7.8–13.6 pJ\u002Fb）的可重构神经网络辅助极化码解码器。** (NTU)\n- **面向沉浸式可穿戴设备手部姿态估计的 4.45ms 低延迟 3D 点云神经网络处理器。** (KAIST)\n- **基于 16nm 工艺、采用并行吉布斯采样（Gibbs Sampling）的 3mm² 可编程贝叶斯推断加速器，适用于无监督机器感知。** (Harvard)\n- 1.03pW\u002Fb 超低漏电堆叠电压 SRAM，适用于智能边缘处理器。（Umich）\n- Z-PIM：支持全精度可变权重的高能效稀疏感知存内计算（Processing-In-Memory）架构。（KAIST）\n\n### 2020 MICRO\n- **SuperNPU：基于超导逻辑器件的极速神经处理单元。** (Kyushu University）\n- **印刷式机器学习分类器（Printed Machine Learning Classifiers）。** (UIUC, KIT）\n- **基于查找表（Look-Up Table）的高能效缓存支持神经网络加速处理架构。** (PSU, Intel)\n- **FReaC Cache：末级缓存中的折叠逻辑可重构计算架构。** (UIUC, IBM)\n- **Newton：DRAM 厂商推出的面向机器学习的存内加速器（Accelerator-in-Memory, AiM）架构。** (Purdue, SK Hynix)\n- **VR-DANN：通过解码器辅助神经网络加速实现实时视频识别。** (SJTU)\n- **Procrustes：面向稀疏深度神经网络训练的数据流与加速器架构。** (University of British Columbia, Microsoft)\n- **Duplo：针对 GPU Tensor Core 优化深度神经网络冗余内存访问。** (Yonsei University, EcoCloud, EPFL)\n- **DUET：基于双模块架构提升深度神经网络效率。** (UCSB, Alibaba)\n- **ConfuciuX：使用强化学习为 DNN 加速器自主分配硬件资源。** (GaTech)\n- **Planaria：通过动态架构分裂实现空间多租户深度神经网络加速。** (UCSD, Bigstream, Kansas, NVIDIA, Google)\n- **TFE：基于迁移滤波器的高能效引擎，用于压缩与加速卷积神经网络。** (THU, Alibaba)\n- **MatRaptor：基于行乘积的稀疏-稀疏矩阵乘法加速器。** (Cornell)\n- **TensorDash：利用稀疏性加速深度神经网络训练。** (Toronto)\n- **SAVE：面向 CPU 上 DNN 训练与推理的稀疏感知向量引擎。** (UIUC)\n- **GOBO：量化基于注意力机制的 NLP 模型，实现低延迟与高能效推理。** (Toronto)\n- **TrainBox：通过系统性平衡操作构建的超大规模神经网络训练服务器架构。** (SNU)\n- **AWB-GCN：具备运行时负载再平衡能力的图卷积网络加速器。** (Boston et al.)\n- **Mesorasi：通过延迟聚合提供点云分析的架构支持。** (Rochestor, ARM)\n- **NCPU：面向资源受限低功耗设备的嵌入式神经 CPU 架构，实现端到端实时性能。** (Northwestern University)\n- FlexWatts：面向高能效微处理器的功率与负载感知混合供电网络。（ETHZ, Intel, Technion, NTU）\n- AutoScale：使用强化学习优化边缘随机推理的能效。（Facebook）\n- CATCAM：具备可扩展存内架构的常数时间可变三元内容寻址存储器（Ternary CAM）。（THU, Southeast University）\n- DUAL：使用数字存内计算（Digital-Based Processing In-Memory）加速聚类算法。（UCSD）\n- Bit-Exact ECC Recovery (BEER)：通过利用 DRAM 数据保持特性确定片上 ECC 函数。（ETHZ）\n\n### 2020 ICCAD\n- ReTransformer：基于 ReRAM 的存内计算架构，用于 Transformer 加速。（Duke）\n- 高能效无 XNOR 存内二值神经网络（BNN）加速器，结合输入分布正则化。（POSTECH）\n- HyperTune：面向异构系统高效分布式 DNN 训练的动态超参数调优。（UCI, NGD）\n- SynergicLearning：基于神经网络的特征提取方法，实现高精度超维学习（Hyperdimensional Learning）。（USC）\n- 优化随机计算以实现卷积神经网络低延迟推理。（南京大学）\n- HAPI：硬件感知渐进推理（Hardware-Aware Progressive Inference）。（Samsung）\n- MobiLattice：配备混合数字\u002F模拟非易失性存内计算模块的深度可分离 DCNN 加速器。（PKU, Duke）\n- 面向片上深度强化学习的多核加速器设计。（ICT）\n- DRAMA：面向高性能与高能效深度训练系统的近似 DRAM 架构。（庆熙大学, NUS）\n- 基于 FPGA、配备高带宽内存的现代 CNN 低批量训练加速器。（ASU, Intel）\n\n### 2021 ISSCC\n- **A100 数据中心 GPU 与 Ampere 架构。**（NVIDIA）\n- **昆仑：面向多样化工作负载的 14nm 高性能 AI 处理器。**（百度）\n- **一款 12nm 自动驾驶处理器，支持 60.4TOPS 算力、13.8TOPS\u002FW 的 CNN 执行，并采用任务分离的 ASIL D（汽车安全完整性等级 D）控制机制。**（瑞萨电子）\n- **BioAIP：一种可重构生物医学 AI 处理器，支持自适应学习，适用于多种智能健康监测场景。**（电子科技大学）\n- **一款 0.2 至 3.6TOPS\u002FW 可编程卷积成像 SoC，在传感器内通过电流域三值加权 MAC 运算实现特征提取与感兴趣区域检测。**（鲁汶大学）\n- **一款 7nm 四核 AI 芯片，支持 25.6TFLOPS 混合 FP8 训练、102.4TOPS INT4 推理及工作负载感知的动态调频。**（IBM）\n- **一款 28nm 12.1TOPS\u002FW 双模式 CNN 处理器，采用基于有效权重的卷积与基于误差补偿的预测方法。**（清华大学）\n- **一款 40nm 4.81TFLOPS\u002FW 8 位浮点训练处理器，用于非稀疏神经网络，采用共享指数偏置与 24 路融合乘加树结构。**（首尔国立大学）\n- **PIU：一款 248GOPS\u002FW 基于流式处理的处理器，用于不规则概率推理网络，采用精度可扩展的 Posit 算术（一种新型数值表示法），工艺为 28nm。**（鲁汶大学）\n- **一款集成于 5nm 旗舰移动 SoC 中的 6K-MAC 
特征图稀疏感知神经处理单元。**（三星）\n- **一款 1\u002F2.3 英寸 1230 万像素背照式堆叠 CMOS 图像传感器，内置 4.97TOPS\u002FW CNN 处理器。**（索尼）\n- **一款 184μW 实时手势识别系统，采用混合微型分类器，适用于智能可穿戴设备。**（南洋理工大学）\n- **一款 25mm² 的物联网 SoC，通过贝叶斯语音降噪与基于注意力机制的序列到序列 DNN 语音识别，在 16nm FinFET 工艺下实现 18ms 抗噪语音转文本延迟。**（哈佛、塔夫茨、ARM、康奈尔）\n- **一款 109nW 背景噪声与工艺偏差容忍的声学特征提取器，基于脉冲域除法能量归一化，适用于常开关键词检测设备。**（哥伦比亚大学）\n- 一款 148nW 通用事件驱动智能唤醒芯片，用于 AIoT 设备，采用异步脉冲型特征提取器与卷积神经网络。（北京大学）\n- 一款基于可扩展存内计算（Computing-in-Memory）的可编程神经网络推理加速器。（普林斯顿大学）\n- 一款 2.75 至 75.9TOPS\u002FW 存内计算神经网络处理器，支持集合关联块级零值跳过与乒乓存内计算，同时支持计算与权重更新。（清华大学）\n- 一款 65nm 基于 3T 动态模拟 RAM 的存内计算宏单元与 CNN 加速器，具备保留增强、自适应模拟稀疏性与 44TOPS\u002FW 系统能效。（西北大学）\n- 一款 5.99 至 691.1TOPS\u002FW 张量列（Tensor-Train）存内计算处理器，采用比特级稀疏优化与可变精度量化。（清华大学、电子科技大学、台湾清华大学）\n- 一款 22nm 4Mb 8 位精度 ReRAM（阻变存储器）存内计算宏单元，能效达 11.91 至 195.7TOPS\u002FW，适用于微型 AI 边缘设备。（台湾清华大学、台积电）\n- eDRAM-CIM：一种基于可重构嵌入式动态存储器阵列的存内计算设计，实现自适应数据转换器与电荷域计算。（德州大学奥斯汀分校、英特尔）\n- 一款 28nm 384kb 6T-SRAM 存内计算宏单元，支持 8 位精度，适用于 AI 边缘芯片。（台湾清华大学、工业技术研究院、台积电）\n- 一款全数字 SRAM 型存内计算宏单元，能效达 89TOPS\u002FW，密度达 16.3TOPS\u002Fmm²，精度为全精度，工艺为 22nm，适用于机器学习边缘应用。（台积电）\n- 一款 20nm 6GB 功能内存 DRAM，基于 HBM2，配备 1.2TFLOPS 可编程计算单元，利用 Bank 级并行性，适用于机器学习应用。（三星）\n- 一款 21×21 动态精度位串行计算图加速器，用于通过有限差分法求解偏微分方程。（南洋理工大学）\n\n### 2021 ASPLOS\n- **利用 Gustavson 算法加速稀疏矩阵乘法。**（MIT、NVIDIA）\n- **SIMDRAM：一种利用 DRAM 实现位串行 SIMD 处理的框架。**（苏黎世联邦理工学院、卡内基梅隆大学）\n- **RecSSD：基于固态硬盘的推荐推理近数据处理方案。**（哈佛、Facebook、亚利桑那州立大学）\n- DiAG：一种受数据流启发的通用处理器架构。（伊利诺伊大学厄巴纳-香槟分校）\n- 场景可配置多分辨率推理：重新思考量化。（哈佛、富兰克林与马歇尔学院）\n- 防御性近似：利用近似计算保护 CNN 安全。（斯法克斯大学等）\n\n### 2021 HPCA\n- **跨领域加速的计算栈。**（加州大学圣地亚哥分校等）\n- **面向多 DNN 工作负载的异构数据流加速器。**（佐治亚理工、Facebook、NVIDIA）\n- **SPAGHETTI：面向 FPGA 的高稀疏 GEMM 流式加速器。**（西蒙菲莎大学等）\n- **SpAtten：高效的稀疏注意力架构，采用级联 Token 与头剪枝。**（MIT）\n- **Mix and Match：一种以 FPGA 为中心的深度神经网络量化框架。**（东北大学等）\n- **Tensor Casting：为个性化推荐训练协同设计算法与架构。**（韩国科学技术院）\n- **GradPIM：一种实用的 DRAM 内梯度下降处理架构。**（首尔国立大学、延世大学）\n- **SpaceA：在存内计算加速器上执行稀疏矩阵向量乘法。**（加州大学圣塔芭芭拉分校、北京大学）\n- **Layerweaver：通过逐层调度最大化神经处理单元资源利用率。**（成均馆大学、首尔国立大学）\n- **深度学习中异构内存系统的高效张量迁移与分配。**（马德里康普顿斯大学、微软）\n- **CSCNN：使用中心对称滤波器的 CNN 加速器算法-硬件协同设计。**（乔治华盛顿大学、俄亥俄大学）\n- **Adapt-NoC：面向异构众核架构的灵活片上网络设计。**（乔治华盛顿大学）\n- **GCNAX：灵活且高能效的图卷积神经网络加速器。**（乔治华盛顿大学、俄亥俄大学）\n- **Ascend：面向无处不在的深度神经网络计算的可扩展统一架构。**（华为）\n- **大规模深度学习推荐模型训练效率分析。**（Facebook）\n- **Eudoxus：刻画并加速自主机器中的定位任务。**（罗切斯特大学等）\n- **NeuroMeter：面向机器学习加速器的集成功耗、面积与时序建模框架。**（加州大学圣塔芭芭拉分校、谷歌）\n- **追逐碳足迹：计算环境影响的难以捉摸之处。**（哈佛、Facebook）\n- **FuseKNA：基于融合核卷积的深度神经网络加速器。**（清华大学）\n- **FAFNIR：通过高效近内存智能规约加速稀疏收集操作。**（佐治亚理工）\n- **VIA：面向向量单元的智能暂存器，应用于稀疏矩阵计算。**（巴塞罗那超级计算中心等）\n- Cheetah：优化并加速同态加密以实现隐私推理。（纽约大学、首尔国立大学、哈佛、Facebook）\n- CAPE：内容可寻址处理引擎。（康奈尔大学、宾州州立大学）\n- Prodigy：通过硬件-软件协同设计改善数据间接不规则工作负载的内存延迟。（密歇根大学等）\n- BRIM：双稳态电阻耦合 Ising 机。（罗切斯特大学）\n- 一种用于求解线性系统的模拟预处理器。（桑迪亚国家实验室等）\n\n### 2021 ISCA\n- 三代演进塑造 Google TPUv4i 的十大经验（Google）\n- 面向三星旗舰移动 SoC 的稀疏感知与可重构 NPU 架构（Samsung）\n- AI 融合的 POWER10 处理器能效提升（IBM）\n- 基于商用 DRAM 技术的 PIM（Processing-in-Memory，存内计算）硬件架构与软件栈（Samsung）\n- AMD EPYC™ 与 Ryzen™ 处理器家族的 Chiplet（小芯片）技术与设计先驱（AMD）\n- RaPiD：超低精度训练与推理的 AI 加速器（IBM）\n- REDUCT：就近计算，保持低温！——在多核 CPU 上通过近缓存计算扩展 DNN 推理（Intel）\n- 分布式深度学习中的通信算法-架构协同设计（UCSB, TAMU）\n- ABC-DIMM：通过 DIMM 间广播缓解基于 DIMM 的近内存处理通信瓶颈（清华大学）\n- Sieve：面向大规模并行 k-mer 匹配的可扩展原位 DRAM 加速器设计（弗吉尼亚大学）\n- FORMS：基于 ReRAM（阻变存储器）的细粒度极化原位混合信号 DNN 加速器（东北大学等）\n- BOSS：面向存储级内存（Storage-Class Memory）的带宽优化搜索加速器（首尔国立大学）\n- 利用枚举基数树加速基因组序列比对的种子生成（密歇根大学）\n- Aurochs：面向数据流线程的架构（斯坦福大学）\n- PipeZK：通过流水线架构加速零知识证明（北京大学等）\n- CODIC：支持定制 DRAM 内功能与优化的低成本基底（苏黎世联邦理工学院）\n- 在分布式深度学习训练平台中实现计算与通信重叠（佐治亚理工学院）\n- CoSA：面向空间加速器的约束优化调度方法（伯克利大学）\n- η-LSTM：通过挖掘内存节省与架构设计机会协同设计高效大型 LSTM 训练（华盛顿大学等）\n- 
FlexMiner：面向图模式挖掘的模式感知加速器（麻省理工学院）\n- PolyGraph：揭示图处理加速器灵活性的价值（加州大学洛杉矶分校）\n- 带数千并发缺失缓存的大规模 FPGA 图处理（洛桑联邦理工学院）\n- SPACE：面向个性化推荐的异构内存局部性感知处理（延世大学）\n- ELSA：神经网络中高效轻量自注意力机制的软硬件协同设计（首尔国立大学）\n- Cambricon-Q：面向高效训练的混合架构（中科院计算所）\n- TENET：基于关系中心记法的张量数据流建模框架（北京大学等）\n- NASGuard：面向鲁棒神经架构搜索（NAS）网络的新型加速器架构（中科院计算所）\n- NASA：使用 NAS 处理器加速神经网络设计（中科院计算所）\n- Albireo：通过硅光子学实现卷积神经网络的高能效加速（俄亥俄州立大学等）\n- QUAC-TRNG：利用商用 DRAM 芯片四行激活实现高吞吐真随机数生成（苏黎世联邦理工学院）\n- NN-Baton：面向多芯片加速器的 DNN 工作负载编排与芯粒粒度探索（清华大学）\n- SNAFU：超低功耗、能量最小化的 CGRA（粗粒度可重构阵列）生成框架与架构（卡内基梅隆大学）\n- SARA：扩展可重构数据流加速器（斯坦福大学）\n- HASCO：面向张量计算的敏捷硬件与软件协同设计（北京大学等）\n- SpZip：为不规则应用提供高效数据压缩的架构支持（麻省理工学院）\n- 双侧稀疏 Tensor Core（微软）\n- RingCNN：利用代数稀疏环张量实现节能的基于 CNN 的计算成像（台湾清华大学）\n- GoSPA：一种高能效高性能全局优化的稀疏卷积神经网络加速器（罗格斯大学）\n\n### 2021 VLSI\n- MN-Core —— 深度学习的高效可扩展方案（Preferred Networks）\n- CHIMERA：配备 2MB 片上代工厂阻变存储器（ReRAM），用于高效训练与推理的 0.92 TOPS、2.2 TOPS\u002FW 边缘 AI 加速器（斯坦福大学、台积电）\n- OmniDRL：采用双模式权重压缩与片上稀疏权重转置器的 29.3 TFLOPS\u002FW 深度强化学习处理器（KAIST）\n- DepFiN：面向高分辨率图像处理的 12nm、3.8TOPs 深度优先 CNN 处理器（鲁汶大学）\n- PNNPU：基于块状点处理实现常规 DRAM 访问的 11.9 TOPS\u002FW 高速 3D 点云神经网络处理器（KAIST）\n- 一款 28nm、276.55TFLOPS\u002FW 的稀疏深度神经网络训练处理器，采用隐式冗余推测与批归一化重构（清华大学）\n- 使用异构计算架构与指数计算存内（Exponent-Computing-in-Memory）的 13.7 TFLOPS\u002FW 浮点 DNN 处理器（KAIST）\n- PIMCA：28nm 工艺下 3.4Mb 可编程存内计算加速器，用于片上 DNN 推理（亚利桑那州立大学）\n- 采用输入相似性优化与基于注意力的输出推测上下文中断机制的 6.54 至 26.03 TOPS\u002FW 存内计算 RNN 处理器（清华大学、台湾清华大学）\n- 采用电容式混合信号计算、支持 5 位输入的全行列并行存内计算 SRAM 宏单元（普林斯顿大学）\n- HERMES Core —— 14nm CMOS 与 PCM（相变存储器）基础的存内计算核心，采用 300ps\u002FLSB 线性化 CCO ADC 阵列与本地数字处理（IBM）\n- 20x28 自旋混合存内退火计算机，采用电压模式模拟自旋算子求解组合优化问题（台湾大学、UCSB）\n- 基于 FeFET（铁电场效应晶体管）1T1R 阵列的模拟存内计算，适用于边缘 AI 应用（索尼）\n- 面向源跟随器与电荷共享电压传感的 HZO FeFET 局部乘法 & 全局累加阵列，实现高能效可靠存内计算（东京大学）\n\n### 2021 ICCAD\n- Bit-Transformer：将位级稀疏性转化为 ReRAM 加速器更高性能（上海交通大学）\n- 面向图卷积网络的基于交叉开关的存内计算加速器架构（宾州州立大学、IBM）\n- REREC：面向个性化推荐的 ReRAM 内加速访问感知映射（杜克大学、清华大学）\n- 面向 ReRAM 加速器的多任务 BERT 执行高效面积框架（KAIST）\n- 面向设备端任务自适应 DNN 训练的收敛监测方法（KAIST）\n\n### 2021 HotChips\n- 在 Esperanto 的 ET-SoC-1 芯片上集成上千个 RISC-V\u002FTensor 处理器加速机器学习推荐（Esperanto Technologies）\n- 燧原科技的 AI 计算芯片（燧原科技）\n- Qualcomm Cloud AI 100：12 TOPs\u002FW 可扩展、高性能、低延迟深度学习推理加速器（高通）\n- Graphcore Colossus Mk2 IPU（Graphcore）\n- 百万核级、多晶圆 AI 集群（Cerebras）\n- SambaNova SN10 RDU：通过数据流加速 Software 2.0（SambaNova）\n\n### 2021 MICRO\n- RACER：使用阻性存储器（Resistive Memory）实现位级流水线处理（CMU, UIUC）\n- AutoFL：支持异构感知的节能联邦学习（Federated Learning）框架（Soongsil, ASU）\n- DarKnight：基于可信硬件（Trusted Hardware）加速隐私与完整性保护的深度学习框架（USC）\n- 2-in-1 加速器：通过随机精度切换同时赢得对抗鲁棒性（Adversarial Robustness）与效率优势（Rice）\n- F1：用于全同态加密（Fully Homomorphic Encryption）的快速可编程加速器（MIT, Umich）\n- Equinox：在定制推理加速器上“免费”训练模型（EPFL）\n- PointAcc：高效的点云（Point Cloud）加速器（MIT）\n- Noema：面向神经群体模式检测的硬件高效模板匹配引擎（Toronto, NeuroTek）\n- SquiggleFilter：便携式病毒检测加速器（Umich）\n- EdgeBERT：面向延迟敏感型多任务 NLP 推理的句子级能耗优化方案（Harvard 等）\n- HiMA：面向可微分神经计算机（Differentiable Neural Computer）的快速可扩展历史内存访问引擎（Umich）\n- FPRaker：用于加速神经网络训练的处理单元（Toronto）\n- RecPipe：联合优化推荐质量与性能的模型与硬件协同设计框架（Harvard, Facebook）\n- Shift-BNN：通过内存友好型模式检索实现高效的概率贝叶斯神经网络（Bayesian Neural Network）训练（Houston 等）\n- 提炼通用深度学习加速中的位级稀疏并行性（ICT, UESTC）\n- Sanger：基于可重构架构（Reconfigurable Architecture）实现稀疏注意力机制的协同设计框架（PKU）\n- ESCALATE：通过核分解（Kernel Decomposition）提升稀疏 CNN 加速器效率（Duke, USC）\n- SparseAdapt：在可重构加速器上运行时控制稀疏线性代数运算（Umich 等）\n- Capstan：面向稀疏性的向量 RDA（Stanford, SambaNova）\n- I-GCN：通过“岛屿化”（Islandization）增强运行时局部性的图卷积网络（Graph Convolutional Network）加速器（PNNL 等）\n\n### 2021 DAC\n- MAT：面向长序列注意力机制的存内计算（Processing In-Memory）加速\n- PIM-Quantifier：用于 mRNA 定量分析的存内计算平台\n- 基于中介层（Interposer）的片上网络设计，支持敏捷神经网络处理器芯片定制\n- GCiM：面向图构建的近数据处理（Near-Data 
Processing）加速器\n- 面向边缘-云端视频流的智能视频处理架构\n- Gemmini：通过全栈集成实现系统性深度学习架构评估\n- PixelSieve：从压缩视频流中实现高效活动分析\n- TensorLib：面向张量代数的空域加速器生成框架\n- 基于小芯片（Chiplet）架构与光互连技术扩展深度学习推理能力\n- Dancing along Battery：在移动设备上通过运行时可重构性支持 Transformer 模型\n- 设计一款包含 2048 个小芯片、14336 核心的晶圆级处理器\n- 利用存内计算加速全同态加密\n\n### 2022 ISSCC\n- 支持 AI 边缘设备相似向量匹配操作的 512Gb 存算一体（In-Memory-Computing）3D NAND 闪存\n- 基于 1ynm 工艺、1.25V 电压、8Gb 容量、16Gb\u002Fs\u002Fpin 的 GDDR6 存内加速器，支持 1TFLOPS MAC 运算及多种激活函数，适用于深度学习应用\n- 22nm 工艺、4Mb 容量的 STT-MRAM 数据加密近内存计算宏单元，具备 192GB\u002Fs 读取与解密带宽，在 8 位 MAC 操作下能效达 25.1–55.1 TOPS\u002FW，适用于 AI 相关运算\n- 40nm 工艺、2M 单元、8 位精度的混合 SLC-MLC PCM 存算一体宏单元，能效达 20.5–65.0 TOPS\u002FW，适用于微型 AI 边缘设备\n- 8Mb 无直流电流的二进制转 8 位精度 ReRAM 非易失性存算一体宏单元，采用时空读出技术，在 AI 边缘设备上实现 1286.4 TOPS\u002FW 至 21.6 TOPS\u002FW 能效\n- 单模 6T CMOS SRAM 宏单元，配备无保持器负载外设与行分离动态体偏置，实现 2.53fW\u002Fbit 漏电流，适用于 AIoT 传感平台\n- 5nm 工艺、254 TOPS\u002FW 与 221 TOPS\u002Fmm² 的全数字存算一体宏单元，支持宽范围动态电压频率调节（DVFS）及同步 MAC 与写入操作\n- 28nm 工艺、1.041Mb\u002Fmm² 密度、27.38TOPS\u002FW 能效的有符号 INT8 动态逻辑 ADC-less SRAM 存算一体宏单元，支持可重构位运算，适用于 AI 与嵌入式应用\n- 28nm 工艺、1Mb 容量、时域 6T SRAM 存算一体宏单元，延迟 6.6ns，8 位 MAC 操作下实现 1241 GOPS 与 37.01 TOPS\u002FW，适用于 AI 边缘设备\n- 多模 8K-MAC 硬件利用率感知神经处理单元，采用统一多精度数据通路，集成于 4nm 旗舰移动 SoC\n- 65nm 工艺阵列式神经 CPU 处理器，结合深度学习与通用计算，PE 利用率达 95%，高数据局部性，端到端性能增强\n- COMB-MCM：面向可扩展多小芯片模块（Multi-Chiplet-Module）边缘机器学习的边界存算（Computing-on-Memory-Boundary）神经网络处理器，采用双极性质位稀疏优化\n- Hiddenite：4K-PE 隐藏网络推理 4D 张量引擎，利用片上模型构建，在 CIFAR-100 与 ImageNet 上实现 34.8 至 16.0 TOPS\u002FW 能效\n- 28nm 工艺、29.2TFLOPS\u002FW BF16 与 36.5TOPS\u002FW INT8 可重构数字存算一体处理器，具备统一浮点\u002F整数流水线与位级存内 Booth 乘法，适用于云端深度学习加速\n- DIANA：端到端高能效的数字与模拟混合神经网络 SoC\n- ARCHON：332.7TOPS\u002FW、5 位精度、抗工艺偏差的模拟 CNN 处理器，配备模拟神经元计算单元与模拟存储器\n- 面向边缘 AI 实时视频分析的模拟矩阵处理器\n- 0.8V 智能视觉传感器，内置微型卷积神经网络与可编程权重，采用混合模式感内计算（Processing-in-Sensor）技术实现图像分类\n- 184QPS\u002FW、64Mb\u002Fmm² 密度、3D 逻辑-DRAM 混合键合结构，配备近内存处理引擎，适用于推荐系统\n- 28nm 工艺、27.5TOPS\u002FW 能效的近似计算型 Transformer 处理器，具备渐进稀疏预测与乱序计算能力\n- 28nm 工艺、15.59μJ\u002FToken 全数字位线转置存算一体稀疏 Transformer 加速器，支持流水线\u002F并行可重构模式\n- ReckOn：28nm、亚平方毫米面积、任务无关的脉冲递归神经网络（Spiking Recurrent Neural Network）处理器，支持秒级时间尺度的片上学习\n\n### 2022 HPCA\n- LISA：基于图神经网络（Graph Neural Network, GNN）的空间加速器可移植映射方案  \n- 模块化芯粒（Chiplet）系统中实现无死锁的向上数据包弹出机制  \n- FAST：使用随机舍入（Stochastic Rounding）的变精度块浮点（Block Floating Point）DNN 训练方法  \n- TransPIM：通过软硬件协同设计在内存中加速 Transformer 的架构  \n- 多个 DNN 在多个加速器核心上的映射优化框架  \n- ScalaGraph：面向大规模并行图处理的可扩展加速器  \n- PIMCloud：在具备内存内计算（Processing-in-Memory, PIM）能力的云环境中，支持服务质量（QoS）感知的延迟敏感型应用资源管理  \n- ANNA：专为近似最近邻搜索（Approximate Nearest Neighbor Search）设计的专用架构  \n- 利用缓存一致性解耦内存系统实现高效的大规模深度学习训练  \n- NeuroSync：基于安全高效推测执行的可扩展高精度脑模拟系统  \n- 在专为贝叶斯神经网络设计的 PIM 中实现高质量不确定性量化  \n- Griffin：重新思考深度学习架构中的稀疏优化问题  \n- CANDLES：面向低能耗稀疏神经网络加速的通道感知（Channel-Aware）新型数据流-微架构协同设计  \n- SPACX：基于硅光子学（Silicon Photonics）的可扩展芯粒加速器，用于 DNN 推理  \n- RM-SSD：面向大规模推荐推理的存储内计算（In-Storage Computing）方案  \n- CAMA：在内容可寻址存储器（Content-Addressable Memories）中实现能效与内存效率兼顾的自动机处理  \n- TNPU：为神经处理单元（Neural Processing Unit）提供无需树结构完整性保护的可信执行支持  \n- S2TA：利用结构化稀疏性实现移动设备上 CNN 加速的能效优化  \n- 利用基于交叉开关（Crossbar）的内存内计算架构加速图卷积网络（Graph Convolutional Networks）  \n- 基于原子数据流（Atomic Dataflow）的图级工作负载编排，用于可扩展 DNN 加速器  \n- SecNDP：在不可信内存环境下实现安全的近数据处理（Near-Data Processing）  \n- 面向储备池计算（Reservoir Computing）的稀疏矩阵乘法器直接空间实现  \n- Hercules：面向超大规模个性化推荐的异构感知推理服务系统  \n- ReGNN：消除冗余计算的图神经网络加速器  \n- 并行时间批处理（Parallel Time Batching）：稀疏脉冲神经计算的脉动阵列加速  \n- GCoD：通过专用算法与加速器协同设计加速图卷积网络  \n- CoopMC：面向马尔可夫链蒙特卡洛（Markov Chain Monte Carlo）加速器的算法-架构协同优化  \n- 异构芯粒系统的应用定义片上网络：实现视角分析  \n- Anton 3 上的专用高性能网络  \n- DarkGates：缓解高性能处理器中“暗硅”（Dark-Silicon）负面影响的混合电源门控架构  
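\n\n> 💡 上表 2022 HPCA 条目中的 FAST 一文把“变精度块浮点（Block Floating Point）+ 随机舍入（Stochastic Rounding）”用于 DNN 训练。下面附一段帮助理解这两个概念的极简 Python 示意（纯属辅助阅读的假设性草图，函数名与参数均为本文虚构，并非该论文或任何库的官方实现）：块内所有元素共享同一个指数、各自只保留少量尾数位，舍入时以小数部分为概率随机进位，从而使量化误差在期望上无偏。\n\n```python\nimport numpy as np\n\n
def bfp_quantize_stochastic(block, mantissa_bits=4):\n    # 块浮点 + 随机舍入的示意实现（假设性示例，省略了尾数表示范围的裁剪）\n    max_abs = np.max(np.abs(block))\n    if max_abs == 0:\n        return np.zeros_like(block)\n    # 1) 块内共享指数：取最大绝对值的二进制指数\n    shared_exp = int(np.floor(np.log2(max_abs)))\n    # 2) 量化网格的步长指数：共享指数 - 尾数位数 + 1\n    exp_shift = shared_exp - mantissa_bits + 1\n    scaled = np.ldexp(block, -exp_shift)  # 即 block * 2**(-exp_shift)\n    # 3) 随机舍入：向下取整后以小数部分为概率加 1，期望恰等于原值（无偏）\n    floor_val = np.floor(scaled)\n    frac = scaled - floor_val\n    rounded = floor_val + (np.random.rand(*np.shape(block)) < frac)\n    return np.ldexp(rounded, exp_shift)  # 还原回浮点域\n\n
# 用法示例：对同一块多次量化取均值，会逐渐趋近原始值，体现无偏性\nx = np.array([0.11, -0.37, 0.52, 0.08, -0.91, 0.33, 0.26, -0.05])\nprint(bfp_quantize_stochastic(x))\n```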
\n\n### 2022 ASPLOS\n- DOTA：检测并省略弱注意力机制，实现可扩展的 Transformer 加速  \n- 面向领域优化深度学习加速器的全栈搜索技术  \n- FINGERS：在图挖掘加速器中挖掘细粒度并行性  \n- BiSon-e：面向边缘设备窄整数线性代数计算的轻量级高性能加速器  \n- RecShard：基于统计特征的工业级神经推荐内存优化方案  \n- AStitch：在现代 SIMT 架构上为内存密集型机器学习训练与推理开启全新的多维优化空间  \n- NASPipe：通过因果同步并行机制实现高性能、可复现的流水线并行超网训练  \n- VELTAIR：通过自适应编译与调度实现高性能多租户深度学习服务  \n- 打破分布式机器学习工作负载中的计算与通信抽象壁垒  \n- GenStore：面向基因组序列分析的存储内处理系统  \n- ProSE：蛋白质发现引擎的架构与设计  \n- REVAMP：面向异构 CGRA（Coarse-Grained Reconfigurable Architecture）实现的系统化框架  \n- Invisible Bits：在 SRAM 模拟域中隐藏秘密信息  \n\n### 2022 ISCA\n- TDGraph：拓扑驱动的高性能流式图处理加速器  \n- DIMMining：在基于 DIMM 的近内存计算架构上实现高效剪枝与并行图挖掘  \n- NDMiner：利用近数据处理（Near Data Processing）加速图模式挖掘  \n- SmartSAGE：使用存储内处理架构训练大规模图神经网络  \n- 面向大规模分布式图神经网络的超大规模 FPGA 即服务（FPGA-As-A-Service）架构  \n- Crescent：驯服内存不规则性以加速深度点云分析  \n- Mozart：面向 AI 及更广泛应用场景的重用暴露数据流处理器  \n- 面向快速可扩展深度学习推荐模型训练的软硬件协同设计  \n- IBM Telum 处理器上的 AI 加速器  \n- 大规模深度推荐模型的数据存储与摄入机制解析  \n- 级联结构化剪枝：提升稀疏 DNN 加速器的数据复用率  \n- 在加速稀疏训练中预测并消除冗余计算  \n- SIMD^2：超越 GEMM 的通用矩阵指令集，用于加速张量计算  \n- 软件定义的张量流式多处理器，面向大规模机器学习  \n- 面向分布式 DL 模型训练的网络带宽感知集体调度策略  \n- 通过多芯片架构提升伊辛机（Ising Machine）容量  \n- 从（GPU）零开始训练个性化推荐系统：向前看而非向后看  \n- AMOS：通过硬件抽象实现在空间加速器上自动映射张量计算  \n- Mokey：为开箱即用的浮点 Transformer 模型启用窄位宽定点推理  \n- 通过基于梯度的运行时学习剪枝加速注意力机制  \n\n### 2022 HotChips\n- Groq 软件定义的横向扩展张量流式多处理器  \n- Boqueria：具备 1,456 个 RISC-V 核心、2 PetaFLOPs 算力、30 TeraFLOPs\u002FW 能效的内存内推理加速设备  \n- DOJO：特斯拉 Exa-Scale 计算机的微架构  \n- DOJO —— 面向机器学习训练的超级计算系统扩展方案  \n- Cerebras 架构深度解析：首次揭秘深度学习的硬件\u002F软件协同设计\n\n### 2022 MICRO\n- Cambricon-P：支持任意精度计算的位流架构（Bitflow Architecture）\n- OverGen：通过领域特定覆盖层生成提升 FPGA 易用性\n- big.VLITTLE：面向片上移动系统的按需数据并行加速\n- 将点云压缩推向边缘设备\n- ROG：面向机器人物联网的高性能、高鲁棒性分布式训练系统\n- 面向自主无人机的领域特定 SoC 自动化设计\n- GCD2：用于将 DNN 映射到移动 DSP 的全局优化编译器\n- Skipper：通过激活检查点与时间跳跃实现高效的脉冲神经网络（SNN）训练\n- 深入 Winograd 卷积：面向 4x4 数据块的逐抽头量化，实现高效推理\n- HARMONY：面向联邦学习系统的异构感知分层管理架构\n- 可适配蝴蝶加速器：通过硬件与算法协同设计加速基于注意力机制的神经网络\n- DFX：低延迟多 FPGA 设备，用于加速基于 Transformer 的文本生成\n- GenPIP：通过碱基识别与读段映射的紧密集成，在内存中加速基因组分析\n- BEACON：支持 CXL 的近内存池可扩展近数据处理加速器，用于基因组分析\n- ICE：基于 3D NAND 内存计算的智能认知引擎，用于向量相似性搜索加速\n- 稀疏注意力加速：结合内存内剪枝与片上重计算的协同优化方法\n- FracDRAM：在商用 DRAM 中实现分数值存储\n- pLUTo：通过查找表实现在 DRAM 内的大规模并行计算\n- 多层内存内处理\n- Flash-Cosmos：利用 NAND 闪存固有计算能力实现闪存内按位运算\n- 通过芯粒架构（Chiplet Architectures）扩展超导量子计算机\n- 基于敏捷方法开发高性能 RISC-V 处理器\n- 以数据为中心的高性能超图处理加速器\n- DPU-v2：面向不规则有向无环图的能效执行架构\n- 3D-FPIM：基于 3D NAND 闪存原位存内计算单元的极致能效 DNN 加速系统\n- DeepBurning-SEG：生成具备段粒度流水线架构的 DNN 加速器\n- ANT：利用自适应数值数据类型实现低位宽深度神经网络量化\n- Sparseloop：一种稀疏张量加速器建模的分析方法\n- Ristretto：面向 CNN 中稀疏压缩流的原子化处理架构\n\n### 2023 ISSCC\n- MetaVRain：133mW 实时超真实 3D-NeRF 处理器，配备 1D-2D 混合神经引擎，适用于移动端元宇宙应用\n- 22nm 832KB 混合域浮点 SRAM 存内计算宏单元，能效达 16.2–70.2 TFLOPS\u002FW，适用于高精度 AI 边缘设备\n- 28nm 64KB 数字域浮点计算单元与双比特 6T-SRAM 存内计算宏单元，能效达 31.6 TFLOPS\u002FW，适用于浮点 CNN\n- 28nm 38–102 TOPS\u002FW 8 位无乘法近似数字 SRAM 存内计算宏单元，用于神经网络推理\n- 4nm 基于 SRAM 的数字存内计算宏单元，支持位宽灵活性与 MAC 和权重更新同步操作，能效达 6163 TOPS\u002FW\u002Fb，密度达 4790 TOPS\u002Fmm²\u002Fb\n- 28nm 基于水平权重移位与垂直特征移位的分离字线 6T-SRAM 存内计算宏单元，适用于边缘端深度神经网络\n- 70.85–86.27 TOPS\u002FW PVT 不敏感 8 位字级模拟存内计算（ACIM），支持后处理松弛\n- CV-CIM：28nm 基于 XOR 的相似性感知存内计算宏单元，用于代价体构建\n- 22nm Delta-Sigma 存内计算（ΔΣCIM）SRAM 宏单元，输出均值接近零且采用 LSB 优先 ADC，能效达 21.38 TOPS\u002FW，适用于 8 位 MAC 边缘 AI 处理\n- CTLE-Ising：基于连续时间锁存器的 1440 自旋伊辛机，支持单次全并行自旋更新与自旋态均衡\n- 7nm 机器学习训练处理器，采用波形时钟分布\n- 1mW 常开式计算机视觉深度学习神经决策处理器\n- MulTCIM：28nm 2.24μJ\u002FToken 注意力令牌位混合稀疏数字存内计算加速器，适用于多模态 Transformer\n- 28nm 53.8 TOPS\u002FW 8 位稀疏 Transformer 加速器，配备内存内蝴蝶零跳过器，支持非结构化剪枝神经网络及基于存内计算的局部注意力复用引擎\n- 28nm 16.9–300 TOPS\u002FW 存内计算处理器，支持浮点神经网络推理\u002F训练，采用密集存内计算稀疏数字架构\n- TensorCIM：28nm 3.7nJ\u002FGather 与 8.3 
TFLOPS\u002FW FP32 数字存内计算张量处理器，适用于 MCM-CIM 架构下的超越神经网络加速\n- DynaPlasia：基于 eDRAM 存内计算的可重构空间加速器，配备三模式单元，支持动态资源切换\n- 非易失性 AI 边缘处理器，配备 4MB SLC-MLC 混合模式 ReRAM 存内计算宏单元，能效达 51.4–251 TOPS\u002FW\n- 40–310 TOPS\u002FW 基于 SRAM 的全数字最高 4 位存内计算多瓦片神经网络加速器，采用 FD-SOI 18nm 工艺，适用于深度学习边缘应用\n- 12.4 TOPS\u002FW @ 136 GOPS AI-IoT 片上系统，集成 16 个 RISC-V 核心、2 至 8 位精度可扩展 DNN 加速器及 30% 性能提升的自适应体偏置技术\n- 28nm 2D\u002F3D 统一稀疏卷积加速器，配备块级邻域搜索器，适用于大规模体素化点云网络\n- 127.8 TOPS\u002FW 任意量化 1 至 8 位精度可扩展通用深度学习加速器，减少存储、逻辑与延迟浪费\n- 28nm 11.2 TOPS\u002FW 硬件利用率感知神经网络加速器，支持动态数据流\n- C-DNN：24.5–85.8 TOPS\u002FW 互补深度神经网络处理器，采用异构 CNN\u002FSNN 核心架构及基于前向梯度的稀疏性生成\n- ANP-I：28nm 1.5pJ\u002FSOP 异步脉冲神经网络处理器，支持亚 0.1μJ\u002F样本片上学习，适用于边缘 AI 应用\n- DL-VOPU：能效导向的领域专用深度学习视觉对象处理单元，支持多尺度语义特征提取，适用于移动端目标检测\u002F跟踪\n- 0.81mm² 740μW 实时语音增强处理器，采用无乘法器 PE 阵列，适用于 28nm CMOS 助听器\n- 12nm 18.1 TFLOPs\u002FW 稀疏 Transformer 处理器，配备基于熵的早退机制、混合精度预测与细粒度功耗管理\n\n### 2023 HPCA\n- SGCN：在深度图卷积网络加速器中利用压缩稀疏特征（Compressed-Sparse Features）\n- PhotoFourier：基于光子联合变换相关器（Photonic Joint Transform Correlator）的神经网络加速器\n- INCA：以输入驻留数据流（Input-stationary Dataflow）重新思考深度学习加速器设计\n- GROW：面向内存高效图卷积神经网络的行驻留稀疏-稠密矩阵乘法（Sparse-Dense GEMM）加速器\n- 深度学习训练中逻辑\u002F物理拓扑感知的集体通信机制\n- Sibia：支持切片级稀疏性利用的密集 DNN 加速有符号位切片架构（Signed Bit-slice Architecture）\n- Baryon：结合压缩与子块划分的高效混合内存管理方案\n- iCACHE：基于重要性采样（Importance-Sampling）的缓存机制，用于加速 I\u002FO 密集型 DNN 模型训练\n- HIRAC：面向 DNN 应用中稀疏矩阵乘法（SpGEMMs）的分层加速器，采用基于排序的打包策略\n- VEGETA：在 CPU 上实现稀疏\u002F稠密矩阵乘法分块加速的垂直集成扩展架构\n- ViTCoD：通过专用算法与加速器协同设计加速视觉 Transformer（Vision Transformer）\n- 利用领域信息实现深度学习加速器的高效自动化设计\n- DIMM-Link：为近内存计算（Near-Memory Processing）启用高效的 DIMM 间通信\n- Post0-VR：通过挖掘架构相似性与数据共享，实现现代 VR 的通用真实感渲染\n- ParallelNN：基于并行八叉树（Octree）的 3D 点云最近邻搜索加速器\n- ViTALiTy：结合低秩与稀疏近似，使用线性泰勒注意力（Linear Taylor Attention）统一加速视觉 Transformer\n- CTA：压缩令牌注意力机制（Compressed Token Attention Mechanism）的软硬件协同设计\n- HeatViT：面向视觉 Transformer 的硬件高效自适应令牌剪枝方案\n- GraNDe：面向图卷积网络的近数据处理架构，支持自适应矩阵映射\n- DeFiNES：通过解析建模快速探索 DNN 加速器的深度优先调度空间\n- CEGMA：面向图匹配网络的协调弹性图匹配加速方案\n- ISOSceles：通过层间流水线加速稀疏 CNN\n- OptimStore：利用片上处理（On-Die Processing）优化大规模 DNN 的存储内计算\n- MERCURY：通过挖掘输入相似性加速 DNN 训练\n- Dalorex：面向内存受限应用的数据本地化程序执行与架构\n- eNODE：面向神经常微分方程（Neural ODEs）的能效高、低延迟边缘推理与训练方案\n- MoCA：面向多租户深度神经网络的以内存为中心的自适应执行方案\n- Mix-GEMM：面向边缘设备混合精度量化 DNN 推理的高效软硬件架构\n- FlowGNN：面向实时、工作负载无关的图神经网络推理的数据流架构\n- Chimera：面向计算密集型算子融合的有效分析优化框架\n- Securator：快速且安全的神经处理单元（Neural Processing Unit）\n\n### 2023 ASPLOS\n- 在大型深度学习模型中通过分解实现通信与依赖计算的重叠\n- Heron：自动约束的高性能库生成工具，专为深度学习加速器设计\n- TelaMalloc：面向生产级机器学习加速器的高效片上内存分配器\n- EVStore：为深度推荐系统中嵌入表（Embedding Tables）扩展提供存储与缓存能力\n- WACO：学习工作负载感知的稀疏张量程序格式与调度协同优化\n- GRACE：基于可扩展图结构的方法，加速推荐模型推理\n- 将超大规模脉冲神经元网络映射至神经形态硬件\n- HuffDuff：从稀疏加速器中窃取已剪枝的 DNN 模型\n- ABNDP：在近数据处理中协同优化数据访问与负载均衡\n- Infinity Stream：可移植且对程序员友好的内存内\u002F近内存融合方案\n- Flexagon：面向高效 DNN 处理的多数据流稀疏-稀疏矩阵乘法加速器\n- 通过动态反射分块（Dynamic Reflexive Tiling）加速稀疏数据编排\n- SPADA：利用自适应数据流加速稀疏矩阵乘法\n- SparseTIR：面向深度学习稀疏编译的可组合抽象\n- Hidet：面向深度学习张量程序的任务映射编程范式\n- 稀疏抽象机（The Sparse Abstract Machine）\n- Homunculus：为数据中心网络自动生成高效数据平面机器学习流水线\n- TensorIR：自动张量化程序优化的抽象接口\n- FLAT：优化数据流以缓解注意力瓶颈\n- TLP：基于深度学习的张量程序调优成本模型\n- Betty：通过批次级图划分（Batch-Level Graph Partitioning）支持大规模 GNN 训练\n- 面向分布式训练的透明传输网络内聚合\n- Optimus-CC：结合三维并行与通信压缩，高效训练大型 NLP 模型\n- DPACS：通过算法-架构协同设计实现动态神经网络剪枝的硬件加速\n- Lucid：非侵入式、可扩展且可解释的深度学习训练任务调度器\n- ElasticFlow：面向分布式深度学习的弹性无服务器训练平台\n- 超大规模硬件优化的神经架构搜索（Neural Architecture Search）\n- MP-Rec：硬件-软件协同设计，支持多路径推荐系统\n\n### 2023 ISCA\n- OliVe：通过硬件友好的“异常值-受害者对”量化（Outlier-Victim Pair Quantization）加速大语言模型\n- FACT：采用积极相关性预测的 FFN-Attention 协同优化 Transformer 架构（FFN-Attention Co-optimized Transformer Architecture）\n- 
Mystique：支持精确且可扩展生成生产级 AI 基准测试的系统\n- 通过跨层级近内存处理加速个性化推荐\n- 理解并缓解深度学习训练系统中的硬件故障\n- LAORAM：面向大规模嵌入表训练的前瞻型 ORAM 架构（Look Ahead ORAM Architecture）\n- 面向大规模推荐系统的 CPU 性能优化\n- SPADE：支持 SpMM 和 SDDMM 的灵活可扩展加速器\n- MESA：用于空间架构生成的微架构扩展（Microarchitecture Extensions for Spatial Architecture Generation）\n- FDMAX：用于求解偏微分方程的弹性加速器架构\n- RSQP：面向加速凸二次优化的问题定制化架构设计\n- ECSSD：面向极端分类任务的硬件\u002F数据布局协同设计的存内计算架构（In-Storage-Computing Architecture）\n- SAC：面向多芯片 GPU 的共享感知缓存机制（Sharing-Aware Caching）\n- SCALO：面向可扩展脑机接口的富加速器分布式系统\n- ETTE：基于张量列（Tensor-Train）的高效深度神经网络计算引擎\n- TaskFusion：面向多任务自然语言处理的高效迁移学习架构，具备双增量稀疏性（Dual Delta Sparsity）\n- 面向分块加速器的层间调度空间定义与探索\n- ArchGym：用于机器学习辅助架构设计的开源仿真平台（Open-Source Gymnasium）\n- V10：硬件辅助的 NPU 多租户机制，提升资源利用率与公平性\n- RAELLA：无需重训练！重塑算术运算以实现高效、低分辨率、低损耗的模拟存内计算（Analog PIM）\n- MapZero：结合强化学习与蒙特卡洛树搜索（Monte-Carlo Tree Search），为粗粒度可重构架构（Coarse-grained Reconfigurable Architectures）进行映射优化\n- TPU v4：支持光学重构与嵌入硬件加速的机器学习超级计算机\n- AMD 超算征程的研究回顾\n- MTIA：第一代专为 Meta 推荐系统设计的硅芯片\n- 利用共享微指数（Shared Microexponents），少量移位即可大幅提升性能\n\n### 2023 MICRO\n- AuRORA：面向多租户工作负载的虚拟化加速器编排系统\n- UNICO：统一的软硬件协同优化框架，用于鲁棒神经网络加速\n- Spatula：面向稀疏矩阵分解的硬件加速器\n- Eureka：面向单侧非结构化稀疏 DNN 推理的高效张量核心\n- RM-STC：受行合并数据流启发的 GPU 稀疏张量核心，实现节能稀疏加速\n- Sparse-DySta：面向稀疏多 DNN 工作负载的稀疏感知动态与静态调度\n- MAICC：轻量级多核架构，结合缓存内计算（In-Cache Computing），支持多 DNN 并行推理\n- SRIM：面向一元计算（Unary Computing）的脉动式随机递增存储架构\n- 通过交错梯度顺序提升 NPU 片上内存在 DNN 训练中的数据复用效率\n- TT-GNN：通过嵌入重构与硬件优化实现高效的片上图神经网络训练\n- Si-Kintsugi：修复有缺陷的多核空间架构，恢复类黄金性能，适用于 AI 应用\n- Bucket Getter：基于桶的处理引擎，支持低比特块浮点数（Block Floating Point, BFP）DNN\n- ADA-GP：通过自适应梯度预测加速 DNN 训练\n- HighLight：利用层次化结构稀疏性实现高效灵活的 DNN 加速\n- 利用复数固有特性加速复数值神经网络\n- 通过挖掘几何局部性加速点云处理\n- HARP：基于硬件的伪分块技术，用于稀疏矩阵乘法加速器\n- TeAAL：用于建模稀疏张量加速器的声明式框架\n- TileFlow：基于树分析的融合数据流建模框架\n\n### 2023 HotChips\n- SK 海力士领域专用内存（Domain-Specific Memory）驱动的以内存为中心的计算\n- 三星 AI 集群系统：结合 HBM-PIM 与基于 CXL 的近内存处理，面向 Transformer 架构的大语言模型\n- 具备光学可重构互连与嵌入支持的机器学习超级计算机\n- 深入剖析 Cerebras 晶圆级集群\n- IBM NorthPole 神经推理机\n- Moffett Antoum：面向视觉与大语言模型的深度稀疏 AI 推理片上系统（System-on-Chip）\n- Qualcomm® Hexagon™ NPU\n\n### 2024 ISSCC\n- ATOMUS：面向延迟敏感型应用的 5nm 32TFLOPS\u002F128TOPS 机器学习片上系统（System-on-Chip, SoC）\n- AMD MI300 模块化小芯片（Chiplet）平台——面向百亿亿次级（Exa-Class）系统的高性能计算与 AI 加速器\n- 采用面对面晶圆键合（Face-to-Face Wafer-Bonded）7nm 逻辑工艺、间距小于 10μm 的三维集成原型片上系统，用于增强现实应用，在相同面积下实现最高 40% 能耗降低\n- Metis AIPU：面向边缘端低成本、高能效推理的 12nm 15TOPS\u002FW 209.6TOPS 片上系统\n- IBM NorthPole：基于 12nm 芯片的神经网络推理架构\n- NVE：面向智能设备高分辨率视觉质量增强的 3nm 23.2TOPS\u002FW 12 位数字存内计算（Computing-in-Memory, CIM）神经引擎\n- 一款 28nm 74.34TFLOPS\u002FW BF16 异构存内计算加速器，利用去噪相似性优化扩散模型\n- 一款 14nm 异构嵌入式 MPU 中支持 16× 性能可调剪枝的 23.9TOPS\u002FW @ 0.8V、130TOPS AI 加速器，适用于实时机器人应用\n- 一款 28nm 物理计算单元（Physics Computing Unit），支持新兴物理信息神经网络（Physics-Informed Neural Network）与有限元法（Finite Element Method），用于边缘设备上的实时科学计算\n- C-Transformer：面向大语言模型的 2.6 至 18.1μJ\u002FToken 同构 DNN-Transformer\u002F脉冲 Transformer 处理器，具备大小网络架构与隐式权重生成机制\n- LSPU：全集成实时 LiDAR-SLAM 片上系统，配备点神经网络分割与多级 kNN 加速\n- NeuGPU：采用分段哈希架构的 18.5mJ\u002FIter 神经图形处理单元（Neural-Graphics Processing Unit），支持即时建模与实时渲染\n- Space-Mate：面向移动空间计算的 303.5mW 实时稀疏专家混合（Sparse Mixture-of-Experts）NeRF-SLAM 处理器\n- 一款 28nm 83.23TFLOPS\u002FW 基于 POSIT 格式的存内计算宏单元，适用于高精度 AI 应用\n- 一款 16nm 96Kb 整数-浮点双模式增益单元存内计算宏单元，能效达 73.3–163.3TOPS\u002FW 和 33.2–91.2TFLOPS\u002FW，适用于 AI 边缘设备\n- 一款 22nm 64kb 类闪电混合存内计算宏单元，配备压缩加法树与模拟存储量化器，适用于 Transformer 与 CNN\n- 一款 3nm 全数字存内计算宏单元，支持 INT12 x INT12 并行 MAC 架构与代工厂 6T SRAM 存储单元，性能达 32.5 TOPS\u002FW、55.0 TOPS\u002Fmm²、3.78 Mb\u002Fmm²\n- 一款 818–4094 TOPS\u002FW 电容重构存内计算宏单元，统一加速 CNN 与 Transformer\n- 一款 28nm 72.12-TFLOPS\u002FW 基于外积的混合域浮点 SRAM 存内计算宏单元，配备对数位宽折叠 
ADC\n- 一款 28nm 2.4Mb\u002Fmm²、6.9–16.3 TOPS\u002Fmm² 基于 eDRAM-LUT 的数字存内计算宏单元，支持内存内编码与刷新\n- 一款 22nm 16Mb 浮点 ReRAM 存内计算宏单元，能效达 31.2 TFLOPS\u002FW，适用于 AI 边缘设备\n- 一款 Flash-SRAM-ADC 融合的“塑性”存内计算宏单元，支持标准 14nm FinFET 工艺下的神经网络学习\n\n### 2024 HPCA\n- 面向 GPU 的带存储级内存（Storage-Class Memory）的高带宽效率 DRAM 缓存\n- Gemini：面向大规模 DNN 小芯片加速器的映射与架构协同探索\n- STELLAR：基于时空计算的能效高、低延迟脉冲神经网络（Spiking Neural Network, SNN）算法与硬件协同设计\n- MIMDRAM：端到端基于 DRAM 的处理系统，支持高吞吐、高能效、程序员透明的多指令多数据（Multiple-Instruction Multiple-Data, MIMD）计算\n- 通过解析商用 PIM 技术探索未来存内计算（Processing-in-Memory, PIM）架构路径\n- 在真实 DRAM 芯片中实现功能完备布尔逻辑：实验表征与分析\n- StreamPIM：在赛道存储器（Racetrack Memory）中进行流式矩阵计算\n- SmartDIMM：内存内上层协议加速\n- BeaconGNN：基于异步存储内计算的大规模图神经网络（Graph Neural Network, GNN）加速\n- Smart-Infinity：在真实系统上使用近存储处理（Near-Storage Processing）加速大型语言模型训练\n- FlashGNN：面向 GNN 训练的 SSD 内加速器\n- DockerSSD：面向计算型 SSD 的容器化存储内处理与硬件加速\n- SPADE：面向自动驾驶的基于稀疏柱体的 3D 目标检测加速器\n- Rapper：面向区块链存储平台的参数感知内存内修复加速器\n- MOPED：支持灵活维度的高效运动规划引擎\n- TALCO：利用回溯指针收敛的基因组序列比对分块方法\n- ECO-CHIP：面向可持续超大规模集成电路（VLSI）的小芯片架构碳足迹估算\n- Lightening-Transformer：动态操作、光互连的光子 Transformer 加速器\n- SACHI：一种稳态感知、全数字、近缓存的 Ising 架构\n- BitWave：利用列级比特稀疏性加速深度学习\n- LUTein：基于 Radix-4 LUT 的密集-稀疏位切片架构与切片张量处理单元\n- FIGNA：基于整数单元的 FP-INT GEMM 加速器设计，在保持数值精度前提下提升性能\n- ASADI：利用对角线原位计算加速稀疏注意力\n- 基于 LPDDR 的 CXL-PNM 平台，实现总拥有成本（TCO）优化的 GPT 推理\n- HotTiles：利用异构加速器架构加速稀疏矩阵乘法（SpMM）\n- SPARK：通过高效编码实现可扩展、精度感知的神经网络加速\n- 数据移动加速：跨域多加速器链式协作\n- RELIEF：通过数据移动感知的加速器调度缓解 SoC 内存压力\n\n### 2024 ASPLOS\n- SpecInfer：基于树结构的推测推理与验证，加速大语言模型服务\n- ExeGPT：面向大语言模型推理的约束感知资源调度\n- Proteus：支持精度缩放的高吞吐推理服务系统\n- SpotServe：在可抢占实例上部署生成式大语言模型\n- MAGIS：通过协同图变换与调度实现深度神经网络的内存优化\n- 面向边缘加速器的 8 位 Transformer 推理与微调\n- Cocco：面向内存容量-通信协同优化的硬件映射联合探索\n- Atalanta：一位比特价值“千”个张量值\n- Harp：利用准序列特性加速长读段的序列到图映射\n- GSCore：通过架构支持 3D 高斯点绘（Gaussian Splatting）实现高效辐射场渲染\n- BeeZip：迈向有组织且可扩展的数据压缩架构\n- ACES：通过自适应执行流与并发感知缓存优化加速稀疏矩阵乘法\n- Explainable-DSE：基于瓶颈分析的敏捷、可解释深度学习加速器软硬件协同设计探索\n- AttAcc! 利用存内计算（PIM, Processing-In-Memory）释放批处理 Transformer 生成模型推理潜力\n- SpecPIM：通过架构-数据流协同探索，在支持 PIM 的系统上加速推测推理\n- PIM-DL：通过算法-系统协同优化拓展商用 DRAM-PIM 在深度学习中的适用性\n- NeuPIMs：面向批处理大语言模型推理的 NPU-PIM 异构加速\n- FEASTA：面向机器学习稀疏张量代数的灵活高效加速器\n- CMC：通过编解码器辅助矩阵压缩加速视频 Transformer\n- Tandem Processor：应对神经网络中新兴算子的处理器\n- Carat：为无乘法器 GEMM 解锁值级并行性\n- ORIANNA：面向基于优化的机器人应用的加速器生成框架\n- SmartMem：通过布局转换消除与适配，提升移动端 DNN 执行效率\n- Dr. 
DNA：利用神经元激活分布对抗深度学习中的静默数据损坏\n- RECom：通过编译方法加速含海量嵌入列的推荐模型推理\n- NDPipe：利用近数据处理（Near-data Processing）实现照片存储中的可扩展推理与持续训练\n- Fractal：联合多层级稀疏模式调优以平衡 DNN 剪枝的精度与性能\n- 通过即时微内核聚合优化动态形状神经网络在加速器上的执行\n- DTC-SpMM：利用 Tensor Core 加速通用稀疏矩阵乘法的桥梁技术\n- SoD2：静态优化动态深度神经网络执行\n- BVAP：面向带限界重复正则表达式的节能高效自动机处理\n- IANUS：基于 NPU-PIM 统一内存系统的集成加速器\n- PIM-STM：面向存内计算系统的软件事务内存\n- CIM-MLC：面向存算一体（Computing-In-Memory）加速器的多层级编译栈\n\n### 2024 HotChips\n- NVIDIA Blackwell GPU：推动生成式 AI 与加速计算发展\n- SambaNova SN40L RDU：突破万亿参数以上规模生成式 AI 计算壁垒\n- 微小架构优化对大规模生成式 AI 系统产生巨大影响\n- AMD Instinct MI300X 生成式 AI 加速器与平台架构\n- 采用光学连接的 AI 计算 ASIC，助力下一代横向扩展架构\n- RNGD：张量收缩处理器\n- 下一代 AMD Versal AI Edge 系列，面向视觉与汽车应用\n- Onyx：面向稀疏张量代数的可编程加速器\n- 下一代 MTIA — Meta 的推荐推理加速器\n\n### 2024 ISCA\n- ReAIM：基于 ReRAM 的自适应 Ising 机，用于求解组合优化问题\n- Splitwise：利用阶段拆分实现高效生成式大语言模型推理\n- 注意鸿沟：张量算法可达的数据移动与运算强度边界\n- 支持数据重排序的可重构加速器，实现低成本片上数据流切换\n- 晶圆级网络交换机\n- PID-Comm：面向商用 DIMM 内处理（Processing-in-DIMMs）的快速灵活集合通信框架\n- PreSto：面向推荐模型训练的存储内数据预处理系统\n- pSyncPIM：面向全 Bank PIM 架构的稀疏矩阵操作部分同步执行\n- NDSearch：通过近数据处理加速基于图遍历的近似最近邻搜索\n- 利用近 CXL 内存处理实现高效大规模推荐模型训练\n- 在 3D 混合键合架构上挖掘新兴 AI 模型的相似性机遇\n- NDPBridge：在近 DRAM Bank 处理架构中实现跨 Bank 协同\n- UM-PIM：基于 DRAM 的统一共享内存空间 PIM\n- MegIS：利用存储内处理实现高性能低成本宏基因组分析\n- 非易失性存内计算（PiM）的纠错机制\n- MAD Max 超越单节点：在分布式系统上加速大规模机器学习模型\n- Cambricon-D：扩散模型的全网络差分加速\n- Flagger：面向大规模跨孤岛联邦学习聚合的协同加速\n- Trapezoid：适用于稠密与稀疏矩阵乘法的多功能加速器\n- NeuraChip：基于哈希解耦空间架构加速 GNN 计算\n- Soter：面向空间加速器的解析式张量架构建模与自动张量程序调优\n- ALISA：通过稀疏感知 KV 缓存加速大语言模型推理\n- Pre-gated MoE：面向快速可扩展专家混合（Mixture-of-Expert）推理的算法-系统协同设计\n- MECLA：通过子矩阵分区缩放实现内存-计算高效的大语言模型加速器\n- Tender：通过张量分解与运行时重量化加速大语言模型\n- 推荐系统训练的异构加速流水线\n- LLMCompass：为大语言模型推理实现高效硬件设计\n\n### 2024 MICRO\n- CamPU：面向深度学习的 3D 空间计算系统的多摄像头处理单元（Multi-Camera Processing Unit）\n- AdapTiV：基于符号相似性的图像自适应 Token 合并方法，用于视觉 Transformer 加速\n- Fusion-3D：即时 3D 重建与实时渲染的集成加速方案\n- 内存系统基准测试、仿真与应用性能分析的混乱现状\n- Stellar：面向密集型与稀疏型空间加速器的自动化设计框架\n- LUCIE：支持即插即用集成的通用芯粒-中介层（Chiplet-Interposer）设计框架\n- SRender：通过感知敏感度的动态精度渲染提升神经辐射场效率\n- EMP：通过原语化（Primitivization）实现高效的 4 位矩阵运算单元\n- BBS：双向比特级稀疏性加速深度学习\n- SCAR：在异构多芯粒模块加速器上调度多模型 AI 工作负载\n- SCALE：面向消息传递图神经网络的结构中心加速器\n- 在 CXL 内存扩展器中实现低开销的通用近数据处理（Near-Data Processing）\n- PIFS-Rec：基于 Fabric 交换机内处理的大规模推荐系统推理架构\n- PIM-MMU：用于加速商用存内计算（Processing-in-Memory, PIM）系统数据传输的内存管理单元\n- Azul：利用分布式片上内存加速稀疏迭代求解器的加速器\n- FloatAP：在关联处理器（Associative Processors）中支持高性能浮点运算\n- COMPASS：基于 SRAM 的存内计算脉冲神经网络（SNN）加速器，具备自适应脉冲预测机制\n- SOFA：通过跨阶段协同分块实现计算-内存优化的稀疏性加速器\n- Leviathan：面向通用近数据计算的统一系统\n- TMiner：面向图模式挖掘的顶点级任务调度架构\n- PointCIM：面向深度点云分析的存内计算架构\n- SambaNova SN40L：通过数据流与专家组合突破 AI 内存墙限制\n- Duplex：支持专家混合（Mixture of Experts）、分组查询注意力（Grouped Query Attention）和连续批处理（Continuous Batching）的大语言模型设备\n- VGA：可扩展长序列模型推理的硬件加速器\n- FuseMax：利用扩展爱因斯坦求和（Extended Einsums）优化注意力加速器设计\n- FlashLLM：基于芯粒的闪存内计算架构，支持 70B 大语言模型的端侧推理\n- GauSPU：面向实时 SLAM 系统的 3D 高斯泼溅（Gaussian Splatting）处理器\n- PyPIM：从微架构设计到 Python 张量全面集成数字存内计算\n- 基于流的数据布局策略，支持扩展内存的近数据处理\n- FiboCIM：基于斐波那契编码电荷域 SRAM 的存内计算加速器，用于 DNN 推理\n- MeMCISA：面向数据库系统的忆阻器使能内存为中心指令集架构（Memristor-enabled Memory-Centric Instruction-Set Architecture）\n\n### 2025 ISSCC\n- 基于形状感知混合架构、早期计算跳过与高斯缓存调度器的 3D GS 处理器，功耗仅 1.78mJ\u002F帧，达 373fps\n- IRIS：一款 8.55mJ\u002F帧的空间计算 SoC，支持可交互渲染与表面感知建模，采用 3D 高斯泼溅技术\n- 16nm 工艺、216kb 容量、188.4TOPS\u002FW 与 133.5TFLOPS\u002FW 效能的微缩多模式增益单元存内计算宏，适用于边缘 AI 设备\n- 接近稀疏性理论极限、复合 AI 模型精度损失 \u003C2-30%、能效达 51.6TFLOPs\u002FW 的全数据路径存内计算宏\n- RNGD：5nm 工艺张量收缩处理器，为大语言模型提供高能效推理\n- 4nm 旗舰移动 SoC 中集成的生成式 AI 专用神经处理单元，采用扇出型晶圆级封装（Fan-Out Wafer-Level Package）\n- SambaNova SN40L：5nm 2.5D 数据流加速器，配备三级内存，支持万亿参数 AI 模型\n- T-REX：16nm FinFET 工艺下，每 Token 耗时 68~567μs、能耗 
0.41~3.95μJ，通过减少外部内存访问与增强硬件利用率的 Transformer 加速器\n- 28nm 工艺、0.22μJ\u002FToken、感知计算强度的 CNN-Transformer 加速器，采用基于混合注意力的层融合与级联剪枝，用于语义分割\n- EdgeDiff：支持混合精度与重排序分组量化，每推理仅 418.4mJ 的多模态少步扩散模型加速器\n- Nebula：28nm 工艺、109.8TOPS\u002FW 的 3D 概率神经网络（PNN）加速器，具备自适应分区、多重跳过与块级聚合特性\n- MAE：3nm 工艺、0.168mm² 面积、576MAC 的微型自动编码器，采用行优先深度优先调度，面向边缘设备视觉生成 AI\n- MEGA.mini：采用新型大\u002F小核架构的通用生成式 AI 处理器，专为 NPU 设计\n- BROCA：52.4~559.2mW 功耗的移动社交代理 SoC，配备自适应比特截断单元与声学聚类比特分组\n- 88.36TOPS\u002FW 比特级权重压缩大语言模型加速器，支持簇对齐 INT-FP-GEMM 与二维工作流重构\n- Slim-Llama：支持二值\u002F三值权重、功耗仅 4.69mW 的十亿参数 Llama 模型大语言模型处理器\n- HuMoniX：57.3fps、12.8TFLOPS\u002FW 的文生动作处理器，具备跨迭代输出稀疏性与跨帧关节相似性\n- 22nm 工艺、60.81TFLOPS\u002FW 的扩散模型加速器，采用带宽感知内存分区与 BL 分段存内计算，高效支持多任务内容生成\n\n### 2025 HPCA\n- eDKM：面向大语言模型训练阶段的高效精确权重聚类方法\n- SoMA：识别、探索并理解面向 DNN 加速器的 DRAM 通信调度空间\n- LUT-DLA：以查找表（Lookup Table）为基础的极低比特深度学习加速器\n- BitMoD：比特串行混合数据类型大语言模型加速\n- FIGLUT：利用查找表实现 FP-INT GEMM 的高能效加速器设计\n- MANT：通过数学自适应数值类型实现大语言模型的高效低比特分组量化\n- 提升大规模 AI 训练效率：C4 方案实现实时异常检测与通信优化\n- 重新审视大规模机器学习研究集群中的可靠性问题\n- Anda：通过可变长度分组激活数据格式解锁高效大语言模型推理\n- LAD：利用局部性感知解码（Locality Aware Decoding）高效加速大语言模型生成式推理\n- VQ-LLM：面向向量量化增强大语言模型推理的高性能代码生成方案\n- InstAttention：存储内注意力卸载，实现低成本长上下文大语言模型推理\n- PIMnet：面向可扩展存内计算的领域专用高效集体通信网络\n- EIGEN：通过异构双层主动中介层网络（Network-on-Active-Interposer）实现高效 3DIC 互连\n- PAISE：面向 Transformer 大语言模型的存内计算加速推理调度引擎\n- FACIL：面向 SoC-PIM 协同端侧大语言模型推理的灵活 DRAM 地址映射方案\n- Lincoln：在消费级设备上实现实时 50~100B 大语言模型推理，采用 LPDDR 接口、支持计算的闪存\n- 让每个人都能负担得起大语言模型推理：利用 NDP-DIMM 扩展 GPU 内存\n\n### 2025 ASPLOS\n- DynaX：通过动态 X:M 细粒度结构化剪枝实现稀疏注意力加速（Sparse Attention Acceleration）\n- ReCA：面向实时高效协作具身自主智能体的集成加速系统（Integrated Acceleration for Real-Time and Efficient Cooperative Embodied Autonomous Agents）\n- Fast On-device LLM Inference with NPUs：利用神经处理单元（NPU）实现设备端大语言模型（LLM）快速推理\n- Helix：通过最大流算法在异构 GPU 与网络上部署大语言模型\n- FlexSP：通过灵活序列并行（Flexible Sequence Parallelism）加速大语言模型训练\n- Spindle：通过波前调度（Wavefront Scheduling）实现多任务大模型的高效分布式训练\n- Concerto：面向大规模深度学习的自动通信优化与调度系统\n- MoE-Lightning：在内存受限 GPU 上实现高吞吐量专家混合模型（MoE）推理\n- FSMoE：面向稀疏专家混合模型（Sparse Mixture-of-Experts Models）的灵活可扩展训练系统\n- CoServe：在有限内存下实现专家协作模型（Collaboration-of-Experts, CoE）的高效推理\n- Klotski：通过专家感知多批次流水线实现高效的专家混合模型推理\n- MoC-System：面向稀疏专家混合模型训练的高效容错系统\n- Accelerating LLM Serving for Multi-turn Dialogues with Efficient Resource Management：通过高效资源管理加速多轮对话场景下的大语言模型服务\n- COMET：迈向实用化的 W4A4KV4 大语言模型服务\n- Past-Future Scheduler for LLM Serving under SLA Guarantees：在服务等级协议（SLA）保障下的大语言模型服务“过去-未来”调度器\n- POD-Attention：通过解锁完整的预填充-解码重叠（Prefill-Decode Overlap）加速大语言模型推理\n- TAPAS：面向云平台大语言模型推理的热与功耗感知调度器\n- PAPI：基于存内计算（Processing-In-Memory）系统的动态并行性挖掘，加速大语言模型解码\n- Be CIM or Be Memory: A Dual-mode-aware DNN Compiler for CIM Accelerators：面向存内计算加速器的双模式感知深度神经网络编译器\n\n### 2025 ISCA\n- WSC-LLM：面向晶圆级芯片的大语言模型服务与架构协同探索\n- FRED：用于三维并行深度神经网络训练的晶圆级互连架构\n- PD Constraint-aware Physical\u002FLogical Topology Co-Design for Network on Wafer：面向晶圆上网络的功耗密度约束感知物理\u002F逻辑拓扑协同设计\n- H2-LLM：面向低批次大语言模型推理的异构混合键合硬件-数据流协同探索\n- HiPER：面向高效机器人学习控制的层次化组合式处理架构\n- Dadu-Corki：面向具身人工智能驱动机器人操作的算法-架构协同设计\n- SpecEE：通过推测性早退（Speculative Early Exiting）加速大语言模型推理\n- Oaken：通过在线-离线混合 KV 缓存量化实现快速高效的大语言模型服务\n- Chimera：面向大语言模型混合并行的通信融合技术\n- LUT Tensor Core：基于查找表（LUT）的低位宽大语言模型推理软硬件协同设计\n- AiF：利用闪存内处理（In-Flash Processing）加速设备端大语言模型推理\n- LIA：通过支持 AMX 的 CPU-GPU 协同计算与 CXL 卸载，在单 GPU 上加速大语言模型推理\n- Cramming a Data Center into One Cabinet: A Co-Exploration of Computing and Hardware Architecture of Waferscale Chip：将数据中心压缩进一个机柜：晶圆级芯片的计算与硬件架构协同探索\n- Ecco：通过熵感知缓存压缩提升大语言模型的内存带宽与容量\n- Hybe：支持百万 Token 上下文窗口的 GPU-NPU 混合系统，实现高效大语言模型推理\n- MeshSlice：面向分布式深度神经网络训练的高效二维张量并行\n- 
AIM：面向高性能存内计算（PIM）架构级 IR 压降缓解的软硬件协同设计\n- OptiPIM：使用整数线性规划优化存内计算加速\n- HeterRAG：面向检索增强生成（Retrieval-augmented Generation）的异构存内计算加速\n- ATiM：面向 DRAM 内处理的张量程序自动调优\n- Hybrid SLC-MLC RRAM Mixed-Signal Processing-in-Memory Architecture for Transformer Acceleration via Gradient Redistribution：通过梯度重分布实现 Transformer 加速的混合 SLC-MLC RRAM 模拟存内计算架构\n- MagiCache：虚拟缓存内计算引擎\n- AMALI：面向现代 GPU 上大语言模型推理的精确分析建模\n- MicroScopiQ：通过异常值感知微缩放量化加速基础模型\n- HYTE：通过静态-动态混合方法实现稀疏加速器的灵活分块\n- NUPEA：通过非均匀处理单元访问优化空间数据流架构上的关键负载\n- Meta's Second Generation AI Chip: Model-Chip Co-Design and Productionization Experiences：Meta 第二代 AI 芯片：模型-芯片协同设计与量产经验\n- Scaling Llama 3 Training with Efficient Parallelism Strategies：通过高效并行策略扩展 Llama 3 训练规模\n- Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures：DeepSeek-V3 洞察：AI 架构扩展挑战与硬件反思\n- BingoGCN：通过细粒度分区与 SLT 实现可扩展高效的图神经网络（GNN）加速\n\n### 2025 HotChips\n- Memory: (Almost) the Only Thing That Matters：内存：（几乎）唯一重要的事情\n- UB-Mesh：华为下一代 AI 超级计算机，采用统一总线互连与 nD 全网格架构\n- Corsair - An In-memory Computing Chiplet Architecture for Inference-time Compute Acceleration：Corsair —— 面向推理时计算加速的存内计算芯粒架构\n- NVIDIA's GB10 SoC: AI Supercomputer On Your Desk：NVIDIA GB10 SoC：桌面上的 AI 超级计算机\n- 4th Gen AMD CDNA™ Generative AI Architecture Powering AMD Instinct™ MI350 Series Accelerators and Platforms：第四代 AMD CDNA™ 生成式 AI 架构，驱动 AMD Instinct™ MI350 系列加速器与平台\n- Ironwood : Delivering best in class perf, perf\u002FTCO and perf\u002FWatt for reasoning model training and serving：Ironwood：为推理模型训练与服务提供业界领先的性能、性能\u002F总拥有成本比及性能\u002F瓦特比\n\n### 2025 MICRO\n- Stratum：面向高效 MoE（专家混合模型）服务的系统-硬件协同设计，采用分层单片 3D 堆叠 DRAM。（加州大学圣地亚哥分校、佐治亚理工学院、伊利诺伊大学厄巴纳-香槟分校、伊利诺伊理工学院）\n- Kelle：面向边缘计算中高效 LLM（大语言模型）服务的 KV 缓存与 eDRAM 协同设计。（纽约大学）\n- LongSight：通过稀疏注意力机制加速大上下文 LLM 的计算使能内存方案。（康奈尔大学）\n- ComPASS：面向处理器-PIM（存内计算）协作的兼容性 PIM 协议架构与调度解决方案。（仁荷大学）\n- PIM-CCA：集成优化可配置功能单元的高效 PIM 架构。（延世大学、KAIST、汉阳大学）\n- 3D-PATH：具备热感知混合键合集成的层次化 LUT（查找表）存内计算加速器。（清华大学、上海交通大学）\n- DECA：基于 3D Roofline 模型的近核 LLM 解压缩加速器。（英特尔、UIUC）\n- StreamTensor：让张量在面向 LLM 的数据流加速器中流动。（UIUC、Inspirit IoT）\n- Chameleon：面向多适配器 LLM 推理环境的自适应缓存与调度。（UIUC、IBM 研究院）\n- Coruscant：GPU 内核与稀疏张量核心协同设计，推动非结构化稀疏性在高效 LLM 推理中的应用。（马里兰大学、d-Matrix）\n- 通过 PIM 与 PNM 集成加速检索增强语言模型。（延世大学、圣塔克拉拉大学）\n- HEAT：面向 Transformer 增强图神经网络的 NPU-NDP 异构架构。（上海交通大学、中国科学院）\n- RayN：利用近内存计算加速光线追踪。（不列颠哥伦比亚大学）\n- Pimba：面向后 Transformer 大语言模型服务的存内计算加速方案。（KAIST、乌普萨拉大学、佐治亚理工学院）\n- GateBleed：利用片上加速器电源门控实现高性能并发动隐蔽攻击于 AI 系统。（北卡罗来纳州立大学、英特尔）\n- Athena：在全同态加密下加速量化卷积神经网络。（中国科学院计算技术研究所、电子科技大学）\n- ccAI：面向 AI 计算的兼容且保密的系统。（科技大学、香港理工大学、蚂蚁集团、南方科技大学）\n- Ironman：利用近内存处理加速隐私保护 AI 中的不经意传输扩展。（北京大学、密码科学技术国家重点实验室、阿里巴巴集团、清华大学）\n- S-DMA：通过空间感知预测与维度自适应数据流加速稀疏扩散模型。（东南大学）\n- LLM.265：视频编码实为张量编码。（杜克大学、卡内基梅隆大学）\n- HLX：统一流水线架构，优化混合 Transformer-Mamba 语言模型性能。（KAIST）\n- ORCHES：在协作式 GPU-PIM 异构系统上协调基于测试时计算的 LLM 推理。（佐治亚理工学院）\n- NetZIP：面向分布式大模型训练的网内无损压缩算法\u002F硬件协同设计。（UIUC、IBM 研究院）\n- 分布式训练效率特性分析：从功耗、性能与热管理视角。（佐治亚理工学院）\n- SkipReduce：利用（互连）网络稀疏性加速分布式机器学习。（KAIST、NVIDIA、汉阳大学）\n- 在环面网络上优化支持容错的 All-to-All 集体通信。（香港科技大学（广州）、华为）\n- AxCore：面向 LLM 推理的量化感知近似 GEMM 单元。（香港科技大学（广州））\n- Amove：通过细粒度分组向量化数据类型缓解异常值与显著点，从而加速 LLM。（北京航空航天大学、清华大学）\n- MX+：突破微缩格式极限，实现高效大语言模型服务。（首尔国立大学）\n- ReGate：在神经处理单元中启用电源门控。（UIUC）\n- 在成本效益型解耦数据中心中进行多维 ML 流水线优化。（宾夕法尼亚州立大学、META、IBM、AMD）\n- Crane：面向平铺架构上 DNN 推理与训练共支持的层间调度框架。（罗格斯大学、德州农工大学、NVIDIA）\n- OASIS：支持 RISCV 张量扩展指令的商用高性能终端 AI 处理器。（北京邮电大学、算能科技）\n- ELK：利用深度学习编译器技术探索多核互联 AI 芯片的效率。（UIUC、微软研究院）\n- 为机器学习赋能向量架构：用于矩阵乘法的 CAMP 架构。（巴塞罗那超级计算中心、加泰罗尼亚理工大学）\n- TAIDL：支持自动生成可扩展测试预言的张量加速器 ISA 定义语言。（UIUC）\n- 商用 SRAM 存内计算设备上真实工作负载的特性分析与优化。（康奈尔大学、南加州大学、MIT、GSI 
公司）\n- SuperMesh：面向加速器的高能效集体通信。（德州农工大学）\n- BitL：用于关键路径缩减的混合位串行与并行深度学习加速器。（延世大学、三星电子）\n- HiPACK：利用 SIMD 与位级管理实现高效的子 8 位直接卷积。（新加坡国立大学、天津工业大学）\n- MCBP：利用位切片支持的稀疏性与重复性实现内存-计算高效的大语言模型推理加速器。（清华大学、上海交通大学）\n- PolymorPIC：在基于 RISC-V 的处理器中嵌入多态存内缓存，实现全栈高效 AI 推理。（上海交通大学、上海人工智能实验室）\n- MHE-TPE：面向混合精度定点张量处理引擎的多操作数高基数编码器。（中国科学技术大学、华盛顿大学、锐石创芯）\n- SMX：通用序列比对加速的异构架构。（巴塞罗那超级计算中心、UPC、康奈尔大学）\n\n### 2026 HPCA\n- Focus：面向高效视觉-语言模型的流式注意力集中架构\n- PADE：通过统一执行与阶段融合实现的无预测器稀疏注意力加速器\n- HR-DCIM：具备统一低成本迭代纠错机制的高可靠性浮点数字存内计算（Digital CIM）架构\n- WATOS：面向晶圆级芯片的大语言模型（LLM）训练策略与架构协同探索\n- ELORA：面向多 LoRA 大语言模型服务的高效 LoRA 与 KV 缓存管理方案\n- FACE：晶圆上完全重叠的 PD 调度与多层次架构协同探索\n- TEMP：面向晶圆级芯片的内存高效、物理感知张量分区映射框架\n- AQPIM：通过存内激活量化打破 PIM（存内计算）容量墙以支持大语言模型\n- MoEntwine：释放晶圆级芯片在大规模专家并行推理中的潜力\n- AUM：通过共享处理器与加速单元提升大语言模型服务效率潜力\n- Uni-STC：统一稀疏张量核心（Sparse Tensor Core）\n- PIMphony：克服基于 PIM 的长上下文大语言模型推理系统中的带宽与容量低效问题\n- RoMe：面向大语言模型的行粒度访问内存系统\n- VAR-Turbo：通过双重冗余释放视觉自回归模型的潜力\n- V-Rex：通过动态 KV 缓存检索实现实时流式视频大语言模型加速\n- CoCoTree：面向可扩展存内计算的具备计算能力的集体通信架构\n- AutoGNN：端到端硬件驱动的图预处理方法，以增强图神经网络（GNN）性能\n- BitDecoding：通过低位 KV 缓存解锁张量核心在长上下文大语言模型中的应用\n- RPU：推理处理单元（Reasoning Processing Unit）","> **注意**：本项目 “Neural-Networks-on-Silicon” 并非传统意义上的可安装\u002F运行的开源软件工具，而是一个由涂峰斌教授维护的 AI 芯片与神经网络硬件架构相关论文资源库。因此，以下“快速上手指南”实为**学术研究资源查阅与本地部署指南**，适合希望系统学习 AI 芯片前沿工作的中国开发者与研究人员。\n\n---\n\n## 环境准备\n\n- **操作系统**：任意支持 Git 与 Markdown 阅读器的操作系统（推荐 Ubuntu 20.04+ \u002F macOS \u002F Windows WSL2）\n- **前置依赖**：\n  - `git` —— 用于克隆仓库\n  - 浏览器或 Markdown 阅读器（如 Typora、VS Code）—— 用于阅读论文列表与注释\n  - （可选）PDF 阅读器 + 学术代理\u002F校园网权限 —— 用于下载原始论文\n\n> 🇨🇳 国内用户建议配置 Git 加速：\n```bash\ngit config --global url.\"https:\u002F\u002Fghproxy.com\u002Fhttps:\u002F\u002Fgithub.com\".insteadOf https:\u002F\u002Fgithub.com\n```\n\n---\n\n## 安装步骤\n\n1. 克隆本仓库到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffengbintu\u002FNeural-Networks-on-Silicon.git\n```\n\n2. 进入目录并查看内容：\n\n```bash\ncd Neural-Networks-on-Silicon\nls\n```\n\n3. （推荐）使用 VS Code 打开项目，获得最佳 Markdown 渲染体验：\n\n```bash\ncode .\n```\n\n---\n\n## 基本使用\n\n### 1. 快速浏览研究脉络\n\n打开 `README.md`，按年份（2014–2026）和会议（ISSCC, ISCA, DAC 等）查阅精选论文，重点关注带 `**粗体**` 标记的核心工作，如：\n\n- **DianNao (2014 ASPLOS)** —— 经典神经网络加速器起点\n- **Eyeriss (2016 ISSCC\u002FISCA)** —— CNN 数据流架构标杆\n- **Cambricon (2016 ISCA)** —— 神经网络指令集先驱\n\n每篇论文条目下常附有简要技术点评（如 RTL 生成、数据复用策略、近阈值设计等），帮助快速把握创新点。\n\n### 2. 按兴趣深入某一年\u002F会议\n\n例如，想研究 FPGA 上的 CNN 加速，可直接跳转至：\n\n```markdown\n### 2016 FPGA\n- **Going Deeper with Embedded FPGA Platform for Convolutional Neural Network.**\n  - *指出 CONV 层计算密集，FC 层内存密集*\n  - *动态精度量化创意（未硬件实现）*\n```\n\n点击原文中的 `[Slides]` 或 `[Demo]` 链接（如有）获取演讲材料。\n\n### 3. 
关联作者主页拓展阅读\n\n访问涂峰斌教授主页获取更完整研究信息：\n\n```\nhttps:\u002F\u002Ffengbintu.github.io\u002Fresearch\u002F\n```\n\n---\n\n> ✅ 你已成功“上手”本资源库！下一步建议：选定一个子方向（如存算一体、稀疏加速、FPGA 映射），按年份纵向精读 3–5 篇核心论文，构建技术演进认知框架。","某AI芯片初创公司的硬件架构师正在为下一代边缘AI加速器设计能效比更高的神经网络计算单元，需快速掌握近十年关键论文的技术演进路径。\n\n### 没有 Neural-Networks-on-Silicon 时\n- 需手动在多个会议官网（如ISCA、ISSCC、MICRO）逐篇搜索“neural accelerator”相关论文，耗时且易遗漏里程碑式工作如DianNao系列。\n- 缺乏按年份和会议分类的结构化索引，难以横向对比2015年FPGA与DAC会议上不同团队的架构设计思路差异。\n- 不清楚哪些论文被领域专家实际认可，常浪费时间阅读低影响力或方法过时的研究。\n- 新人入职后需数周才能建立对AI芯片技术脉络的基本认知，拖慢项目启动节奏。\n- 无法快速定位到特定优化方向（如近传感器计算）的关键奠基论文，导致重复造轮子。\n\n### 使用 Neural-Networks-on-Silicon 后\n- 一键访问按年份+顶级会议组织的精选论文库，5分钟内即可拉出2014–2018年所有ASPLOS\u002FMICRO上的神经加速器代表作。\n- 通过目录结构直观看到ShiDianNao（2015 ISCA）如何将视觉处理推向传感器端，直接启发当前项目的存算一体设计。\n- 所有收录论文均经涂峰斌教授筛选，确保是真正推动领域发展的高价值工作，避免踩坑无效方案。\n- 新成员通过浏览“My Contributions”和README背景介绍，快速理解行业权威视角下的技术演进逻辑。\n- 在设计低功耗模块时，直接参考2015 FPGA与DAC条目下两篇互补论文，融合FPGA灵活性与ASIC能效优势。\n\nNeural-Networks-on-Silicon 将碎片化的顶级会议论文转化为结构化知识地图，让AI芯片开发者以小时级而非周月级的时间完成关键技术调研。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffengbintu_Neural-Networks-on-Silicon_bb2b14cd.png","fengbintu","Fengbin Tu","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffengbintu_e2da0594.jpg","I'm an Assistant Professor at HKUST, with the PhD degree from Tsinghua University. My research interests include AI Chip and Computing-in-Memory.","HKUST","Hong Kong, China",null,"https:\u002F\u002Ffengbintu.github.io","https:\u002F\u002Fgithub.com\u002Ffengbintu",2075,393,"2026-04-05T08:55:17",5,"","未说明",{"notes":91,"python":89,"dependencies":92},"该项目为论文合集与研究资料库，非可运行的AI工具，无实际代码或环境依赖。内容聚焦AI芯片架构设计，适合研究人员参考阅读。",[],[13],[95,96],"deep-learning","hardware",4,"2026-03-27T02:49:30.150509","2026-04-06T07:16:11.624003",[101,106,110,115,120,124],{"id":102,"question_zh":103,"answer_zh":104,"source_url":105},464,"Vathys芯片如何支持自定义层？是否需要修改硬件？","不需要修改硬件。Vathys处理器通过计算图抽象实现可编程性，类似于数据流机器。用户只需将新层表达为基本节点（如乘法、加法等）的组合，图编译器会自动处理其余部分，类似TRIPS或EDGE架构的做法。","https:\u002F\u002Fgithub.com\u002Ffengbintu\u002FNeural-Networks-on-Silicon\u002Fissues\u002F6",{"id":107,"question_zh":108,"answer_zh":109,"source_url":105},465,"Vathys芯片在嵌入式设备上是否适用？能否只保留推理功能以节省功耗？","Vathys芯片主要优势在于10倍的能效性能乘积提升，尤其适用于ResNet等经典网络。其设计重点是减少数据移动（占功耗和性能瓶颈约90%），因此即使在高并行负载下也具备优异能效，适合嵌入式场景。但当前资料未明确说明是否支持裁剪训练逻辑模块。",{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},466,"如何获取ISSCC 2019论文《A 65nm 236.5nJ\u002FClassification Neuromorphic Processor with 7.5% Energy Overhead On-Chip Learning Using Direct Spike-Only Feedback》？","建议直接联系论文作者，询问他们是否可以提供论文副本。这是获取会议论文最常用且有效的方式。","https:\u002F\u002Fgithub.com\u002Ffengbintu\u002FNeural-Networks-on-Silicon\u002Fissues\u002F18",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},467,"除了FPGA'16那篇论文，还有哪些论文实现了在单块FPGA上运行完整CNN网络？","自FPGA'16以来，已有多个工作实现了在FPGA上加速完整CNN\u002FRNN网络，例如FPGA'16中的另一个基于OpenCL的设计，以及ICCAD'16的Caffeine等。","https:\u002F\u002Fgithub.com\u002Ffengbintu\u002FNeural-Networks-on-Silicon\u002Fissues\u002F3",{"id":121,"question_zh":122,"answer_zh":123,"source_url":105},468,"Vathys芯片相比其他AI加速器的主要优势是什么？","Vathys芯片在经典网络（如ResNet）上具有10倍的能效性能乘积优势，主要通过减少数据移动实现（数据移动占功耗和性能瓶颈约90%）。详细技术原理可参考其斯坦福EE380演讲视频：https:\u002F\u002Fyoutu.be\u002F4nSn0JhZX18。",{"id":125,"question_zh":126,"answer_zh":127,"source_url":105},469,"开发者能否自行开发SDK插件来支持自己提出的私有层？","虽然原始回复未直接回答此问题，但根据其架构描述——通过图编译器将自定义层分解为基础运算节点——理论上开发者应可通过更新SDK或编写插件方式支持私有层，无需开源或修改硬件。",[]]