[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-HazyResearch--aisys-building-blocks":3,"tool-HazyResearch--aisys-building-blocks":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":78,"stars":82,"forks":83,"last_commit_at":84,"license":78,"difficulty_score":85,"env_os":86,"env_gpu":87,"env_ram":87,"env_deps":88,"category_tags":91,"github_topics":78,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":92,"updated_at":93,"faqs":94,"releases":95},1215,"HazyResearch\u002Faisys-building-blocks","aisys-building-blocks","Building blocks for foundation models.","aisys-building-blocks 是一个开源资源库，专注于高效基础模型的构建模块，汇集了前沿研究、实用课程和关键论文，帮助AI开发者快速掌握优化模型性能的核心技术。它解决了资源分散的问题，让研究人员和系统工程师能一站式获取最新进展，无需再费力搜寻零散资料。\n\n特别适合AI系统开发者和研究人员，尤其是关注硬件感知算法（如FlashAttention-2加速长序列处理）、注意力机制替代以及合成数据技术的群体。亮点包括对S4简化、Long Convolutions等创新的深入解析，源自斯坦福等机构的最新成果，覆盖从数据清洗到模型压缩的全链路实践。\n\n通过整合NeurIPS演讲、课程链接和博客文章，aisys-building-blocks 提供了清晰的学习路径，无论是初学者还是资深开发者，都能轻松跟上AI系统领域的前沿步伐，高效应用于实际项目中。","# Building Blocks for AI Systems\n\nThis is a (biased) view of great work studying the building blocks of efficient and performant foundation models.\nThis Github was originally put together as a place to aggregate materials for a [NeurIPS keynote](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2023\u002Finvited-talk\u002F73990) - but we're also hoping to highlight great work across AI Systems.\nIf you think we're missing something, please open an issue or PR!\n\nSlides from Chris Ré's NeurIPS Keynote: https:\u002F\u002Fcs.stanford.edu\u002F~chrismre\u002Fpapers\u002FNeurIPS23_Chris_Re_Keynote_DELIVERED.pptx \n\n**Courses.** Courses are great resources for getting started in this space.\nIt's great that we have so many that have open materials!\nHere's a partial list of courses -- it's biased toward Stanford courses, so please reach out if you think of other resources that are helpful!\n* [Stanford CS 324 LLMs](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2022\u002F)\n* [Stanford CS 324 Advances in Foundation Models](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2023\u002F)\n* [Sasha's talk on do we need attention?](https:\u002F\u002Fgithub.com\u002Fsrush\u002Fdo-we-need-attention\u002Fblob\u002Fmain\u002FDoWeNeedAttention.pdf)\n* [Stanford CS 
229S Systems for Machine Learning](https:\u002F\u002Fcs229s.stanford.edu\u002Ffall2023\u002F)\n* [MLSys Seminar](https:\u002F\u002Fmlsys.stanford.edu\u002F)\n* [Berkeley AI-Sys](https:\u002F\u002Fucbrise.github.io\u002Fcs294-ai-sys-sp22\u002F)\n* [MIT CS 6.5940](https:\u002F\u002Fhanlab.mit.edu\u002Fcourses\u002F2023-fall-65940)\n\nIf you just want to follow along on the major pieces from the talk, check out these blog posts:\n* [Data Wrangling with Foundation Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-13-datawrangling)\n* [FlashAttention](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-12-flashattention-long-sequences) and [FlashAttention-2](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-07-17-flash2)\n* [Simplifying S4](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-11-simplifying-s4)\n* [Long Convolutions for GPT-style Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-conv-tutorial)\n* [Zoology Synthetics Analysis](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [Zoology Based](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology2-based)\n* [Truly Sub-Quadratic Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-truly-subquadratic)\n\nAn older set of resources on [Data-Centric AI](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fdata-centric-ai).\n\nThe rest of this README is split up into resources by topic.\n\n**Table of contents:**\n* [Foundation Models for Systems](#foundation-models-for-systems)\n* [Hardware-Aware Algorithms](#hardware-aware-algorithms)\n* [Can We Replace Attention?](#can-we-replace-attention)\n* [Synthetics for Language Modeling](#synthetics-for-language-modeling)\n* [Truly Sub-Quadratic Models](#truly-sub-quadratic-models)\n* [Quantization, Pruning, and Distillation](#quantization-pruning-and-distillation)\n* [Systems for Inference](#systems-for-inference)\n* [High-Throughput](#high-throughput)\n* [New Data Types](#new-data-types)\n\n## Foundation Models for Systems\nFoundation models are changing the ways that we build systems for classical problems like data cleaning.\n[SIGMOD keynote](https:\u002F\u002Fcs.stanford.edu\u002F~chrismre\u002Fpapers\u002FSIGMOD-Chris-Re-DataCentric-Foundation-Models-KeyNote.pdf) on this topic.\nIhab Ilyas and Xu Chen's textbook on data cleaning: [Data Cleaning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fbook\u002F10.1145\u002F3310205).\nThe [ML for Systems](https:\u002F\u002Fmlforsystems.org\u002F) workshops and community are great.\n\n### Blog Posts\n* [Bad Data Costs the U.S. 
$3 Trillion Per Year](https:\u002F\u002Fhbr.org\u002F2016\u002F09\u002Fbad-data-costs-the-u-s-3-trillion-per-year)\n* [Data Wrangling with Foundation Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-13-datawrangling)\n* [Ask Me Anything: Leveraging Foundation Models for Private & Personalized Systems](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-04-18-personalization)\n\n### Papers\n* [Holoclean: Holistic Data Repairs with Probabilistic Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.00820)\n* [Can Foundation Models Wrangle Your Data?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.09911)\n* [Can Foundation Models Help Us Achieve Perfect Secrecy?](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13722) and [ConcurrentQA](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00580\u002F117168\u002FReasoning-over-Public-and-Private-Data-in)\n* [Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09433)\n* [Symphony: Towards Natural Language Query Answering Over Multi-Modal Data Lakes](https:\u002F\u002Fwww.cidrdb.org\u002Fcidr2023\u002Fpapers\u002Fp51-chen.pdf)\n* [CodexDB: Synthesizing Code for Query Processing from Natural Language Instructions using GPT-3 Codex](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol15\u002Fp2921-trummer.pdf)\n* [CHORUS: Foundation Models for Unified Data Discovery and Exploration](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09610)\n* [How Large Language Models Will Disrupt Data Management](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3611479.3611527)\n* [GPTuner: A Manual-Reading Database Tuning System via GPT-Guided Bayesian Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03157)\n* [Jellyfish: A Large Language Model for Data Preprocessing](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01678)\n* [Can Large Language Models Predict Data Correlations from Column Names?](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3625054.3625066)\n\n## Hardware-Aware Algorithms\n\nHardware-aware algorithms for today's ML primitives.\nCanonical resources:\n* A classic look at I\u002FO complexity, from the database folks: [The input\u002Foutput complexity of sorting and related problems](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F48529.48535).\n* The canonical book on computer architectures: [Computer Architecture: A Quantitative Approach](https:\u002F\u002Fia800203.us.archive.org\u002F31\u002Fitems\u002F2007ComputerArchitectureAQuantitativeApproach\u002F2007%20-%20Computer%20Architecture%20-%20A%20Quantitative%20Approach.pdf).\n* The canonical text book for everything FFT's: [Computational Frameworks for the Fast Fourier Transform](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002Fbook\u002F10.1137\u002F1.9781611970999).\n\n[Jim Gray's Turing Award Profile](https:\u002F\u002Famturing.acm.org\u002Faward_winners\u002Fgray_3649936.cfm).\n\n### Blog Posts\n* [Horace He's Making Deep Learning Go Brrrr from First Principles](https:\u002F\u002Fhorace.io\u002Fbrrr_intro.html)\n* [Aleksa Gordic's ELI5 for FlashAttention](https:\u002F\u002Fgordicaleksa.medium.com\u002Feli5-flash-attention-5c44017022ad)\n* [FlashAttention](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F01\u002F13\u002Fflashattention.html)\n* [FlashFFTConv](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-11-13-flashfftconv)\n* [Sasha's GPU 
Puzzles](https:\u002F\u002Fgithub.com\u002Fsrush\u002FGPU-Puzzles)\n\n### Papers\n* [FlashAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14135) and [FlashAttention-2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08691)\n* [Self-Attention Does Not Need O(N^2) Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.05682)\n* [FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05908)\n* [tcFFT: Accelerating Half-Precision FFT through Tensor Cores](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.11471)\n* [Cooley-Tukey FFT Algorithm](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCooley%E2%80%93Tukey_FFT_algorithm)\n* [Efficient Memory Management for Large Language Model Serving with PagedAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06180)\n* [Ring Attention with Blockwise Transformers for Near-Infinite Context](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01889)\n* [Faster Causal Attention Over Large Sequences Through Sparse Flash Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01160)\n* [FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06419)\n* [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.08053)\n* [HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent](https:\u002F\u002Farxiv.org\u002Fabs\u002F1106.5730)\n* [Efficiently Scaling Transformer Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.05102)\n* [Microsoft DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed)\n* [Eleuther's GPT-NeoX Repo](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fgpt-neox)\n* [A Systematic Approach to Blocking Convolutional Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04209)\n* [TVM: An Automated End-to-End Optimizing Compiler for Deep Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04799)\n* [MegaBlocks: Efficient Sparse Training with Mixture-of-Experts](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~matei\u002Fpapers\u002F2023\u002Fmlsys_megablocks.pdf)\n* [Blockwise Self-Attention for Long Document Understanding](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02972)\n\n## Can We Replace Attention?\n\nAlternatives to attention that scale sub-quadratically in sequence length.\nCanonical text on signal processing: [Discrete-Time Signal Processing](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F1795494).\nHigh-level overview of this space: [From Deep to Long Learning](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-03-27-long-learning).\n\n### Blog Posts\n* [What is a long convolution?](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-conv-tutorial)\n* [Can Longer Sequences Help Take the Next Leap in AI?](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-09-longer-sequences-next-leap-ai)\n* [Simplifying S4](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-11-simplifying-s4)\n* [Sasha's Great Annotated S4](https:\u002F\u002Fsrush.github.io\u002Fannotated-s4\u002F)\n* [H3: Language Modeling with State Space Models and (Almost) No Attention](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-20-h3)\n* [Hyena Blog](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-06-08-hyena-safari)\n* Mamba tweet threads by 
[Albert](https:\u002F\u002Ftwitter.com\u002F_albertgu\u002Fstatus\u002F1731727672286294400) and [Tri](https:\u002F\u002Ftwitter.com\u002Ftri_dao\u002Fstatus\u002F1731728602230890895)\n* [StripedHyena-7B](https:\u002F\u002Fwww.together.ai\u002Fblog\u002Fstripedhyena-7b)\n* [Zoology](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology0-intro)\n* [Zoology Analysis](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [Based Architecture](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology2-based)\n\n### Papers\n* [Long Range Arena](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04006)\n* [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00752) and [code](https:\u002F\u002Fgithub.com\u002Fstate-spaces\u002Fmamba)\n* [Zoology: Measuring and improving recall in efficient language models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04927)\n* [RWKV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13048) and [code](https:\u002F\u002Fgithub.com\u002FBlinkDL\u002FRWKV-LM)\n* [Efficiently Modeling Long Sequences with Structured State Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00396)\n* [Long Range Language Modeling via Gated State Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13947)\n* [Hungry Hungry Hippos: Towards Language Modeling with State Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14052)\n* [Hyena Hierarchy: Towards Larger Convolutional Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.10866)\n* [Simplified State Space Layers for Sequence Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.04933)\n* [On the Parameterization and Initialization of Diagonal State Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11893)\n* [Mega: Moving Average Equipped Gated Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10655)\n* [Simple Hardware-Efficient Long Convolutions for Sequence Modeling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06646)\n* [Diagonal State Spaces are as Effective as Structured State Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14343)\n* [Retentive Network: A Successor to Transformer for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08621)\n* [Resurrecting Recurrent Neural Networks for Long Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06349)\n* [MultiResFormer: Transformer with Adaptive Multi-Resolution Modeling for General Time Series Forecasting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18780)\n* [CKConv: Continuous Kernel Convolution For Sequential Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.02611)\n* [Pretraining Without Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10544)\n* [Diffusion Models Without Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18257)\n* [Liquid Structural State-Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12951)\n* [Fourier Neural Operator for Parametric Partial Differential Equations](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08895)\n\n### Attention Approximations\nThere's also a great literature around approximating attention (sparse, low-rank, etc).\nJust as exciting!\nHere's a partial list of great ideas in this area:\n* [Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16236)\n* [Reformer: The Efficient 
Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.04451)\n* [Rethinking Attention with Performers](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.14794)\n* [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.03902)\n* [Linformer: Self-Attention with Linear Complexity](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.04768)\n* [Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00035)\n* [Scatterbrain: Unifying Sparse and Low-rank Attention Approximation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.15343)\n* [Big Bird: Transformers for Longer Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.14062)\n* [Luna: Linear Unified Nested Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01540)\n* [FNet: Mixing Tokens with Fourier Transforms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03824)\n* [The Devil in Linear Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10340)\n* [cosFormer: Rethinking Softmax in Attention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08791)\n\n## Synthetics for Language Modeling\nIn research on efficient language models, synthetic tasks (_e.g._ associative recall) are crucial for understanding and debugging issues before scaling up to expensive pretraining runs.  \n\n### Code\nWe've created a very simple GitHub repo with a simple playground for understanding and testing language model architectures on synthetic tasks: **[HazyResearch\u002Fzoology](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fzoology)**.\n\n### Blog Posts\n* [Zoology blog post on synthetics](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [H3 blog post](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-20-h3) section on associative recall\n* [Anthropic's great explainer of associative recall in *induction heads*](https:\u002F\u002Ftransformer-circuits.pub\u002F2022\u002Fin-context-learning-and-induction-heads\u002Findex.html#definition-of-induction-heads)\n\n### Papers\n* [Zoology section 3-4](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04927)\n* [H3 section 3.1](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14052)\n* [In-context Learning and Induction Heads](https:\u002F\u002Ftransformer-circuits.pub\u002F2022\u002Fin-context-learning-and-induction-heads\u002Findex.html)\n* [Associative Long Short-Term Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.03032)\n* [Using Fast Weights to Attend to the Recent Past](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.06258)\n* [Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.06493)\n* [Self-Attentive Associative Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03519)\n* [Neural Turing Machines](https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.5401)\n* [Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks](https:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2019\u002Fhash\u002F952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)\n* Synthetic tasks go all the way back to LSTMs: [Long Short-Term Memory](https:\u002F\u002Fdeeplearning.cs.cmu.edu\u002FF23\u002Fdocument\u002Freadings\u002FLSTM.pdf)\n\n## Truly Sub-Quadratic Models\n\nML models are quadratic along another dimension -- model width.\nCan we develop models that grow sub-quadratically with model 
width?\n\nThe canonical textbook for a lot of this stuff: [Structured Matrices and Polynomials](https:\u002F\u002Flink.springer.com\u002Fbook\u002F10.1007\u002F978-1-4612-0129-8).\n\n### Blog Posts\n* [Towards Truly Subquadratic Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-truly-subquadratic)\n* [M2-BERT: Revisiting BERT, Without Attention or MLPs](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-07-25-m2-bert)\n* [Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-01-17-Sparsity-3-Pixelated-Butterfly)\n* [Butterflies Are All You Need: A Universal Building Block for Structured Linear Maps](https:\u002F\u002Fdawn.cs.stanford.edu\u002F2019\u002F06\u002F13\u002Fbutterfly\u002F)\n\n### Papers\n* [Monarch Mixer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12109)\n* [Monarch](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00595)\n* [Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.00029)\n* [Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.05895)\n* [Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.14966)\n* [Fast Algorithms for Spherical Harmonic Expansions](http:\u002F\u002Ftygert.com\u002Fbutterfly.pdf)\n* [Butterfly Factorization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.01379)\n* [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.06905)\n* [A Two Pronged Progress in Structured Dense Matrix Multiplication](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01569)\n\n## Quantization, Pruning, and Distillation\nQuantization, pruning, and distillation are great techniques to improve efficiency.\nHere's just a short overview of some of the ideas here:\n* [QLoRA: Efficient Finetuning of Quantized LLMs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14314)\n* [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635)\n* [Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding](https:\u002F\u002Farxiv.org\u002Fabs\u002F1510.00149)\n* [Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07565)\n* [QuIP#: QuIP with Lattice Codebooks](https:\u002F\u002Fcornell-relaxml.github.io\u002Fquip-sharp\u002F)\n* [SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10438)\n* [SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning](https:\u002F\u002Fhanlab.mit.edu\u002Fprojects\u002Fspatten)\n* [Accelerating Inference with Sparsity Using the NVIDIA Ampere Architecture and NVIDIA TensorRT](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Faccelerating-inference-with-sparsity-using-ampere-and-tensorrt\u002F)\n* [SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03078)\n* [LoRA: Low-Rank Adaptation of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09685)\n* [MCUNet: Tiny Deep Learning on IoT Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.10319)\n* 
[MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fmongoose)\n\n## Systems for Inference\nInference is an increasingly important cost for LLMs: a model will be served many more times than it is trained.\nSystems for inference are an increasingly important problem.\nHere's some papers and posts on the topic, there's a lot to do!\n* [Fast Transformer Decoding: One Write-Head is All You Need](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02150)\n* [GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13245)\n* [Flash-Decoding for long-context inference](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F10\u002F12\u002Fflashdecoding.html)\n* [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)\n* [Fast Inference from Transformers via Speculative Decoding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.17192)\n* [MatFormer: Nested Transformer for Elastic Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07707)\n* [Efficient Streaming Language Models with Attention Sinks](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17453)\n* [Hugging Face TGI](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftext-generation-inference\u002Findex)\n* [NVIDIA TensorRT](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT)\n* [Together Inference Engine](https:\u002F\u002Fwww.together.ai\u002Fblog\u002Ftogether-inference-engine-v1)\n* [Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18780)\n* [H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14048)\n\n## High-Throughput\nFoundation models will increasingly be used to serve back-of-house tasks like document processing (not just chat interfaces).\nThese will require different systems than our current inference solutions.\nThis work is still very new, but hopefully there's a lot more to come soon!\n* [Batch computing and the coming age of AI systems](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-04-12-batch).\n* [FlexGen: High-throughput Generative Inference of Large Language Models with a Single GPU](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06865)\n* [Evaporate: Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp92-arora.pdf)\n\n## New Data Types\nMost ML models focus on text or images, but there's a large variety of other modalities that present unique challenges (e.g., long context).\nNew modalities will drive advances in model architectures and systems.\nA few modalities compiled below:\n* DNA: [HyenaDNA paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15794) and [blog](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-06-29-hyena-dna)\n* [SSMs for Video](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14526)\n* [SpaceTime: Effectively Modeling Time Series with Simple Discrete State Spaces](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09489) [[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09489)] [[code](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fspacetime\u002Ftree\u002Fmain)], [[demo](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1dyR7ZGnjNfS2GMjRUfDzujQLhxSo-Xsk?usp=sharing)]\n* [Recurrent Distance-Encoding Neural Networks for Graph Representation 
Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01538)\n* [Modeling Multivariate Biosignals With Graph Neural Networks and Structured State Space Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11176)\n* [Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08336)\n* [Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11417)\n* [scHyena: Foundation Model for Full-Length Single-Cell RNA-Seq Analysis in Brain](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02713)\n","# 人工智能系统的构建模块\n\n这是对高效且高性能基础模型构建模块研究中优秀工作的（带有偏见的）综述。\n这个 GitHub 仓库最初是为了汇集 [NeurIPS 主题演讲](https:\u002F\u002Fneurips.cc\u002Fvirtual\u002F2023\u002Finvited-talk\u002F73990) 的相关资料而创建的——但我们同时也希望突出展示人工智能系统领域的优秀工作。\n如果您认为我们遗漏了某些内容，请随时提交 issue 或 pull request！\n\nChris Ré 在 NeurIPS 主题演讲中的幻灯片：https:\u002F\u002Fcs.stanford.edu\u002F~chrismre\u002Fpapers\u002FNeurIPS23_Chris_Re_Keynote_DELIVERED.pptx \n\n**课程。** 课程是进入这一领域的好资源。\n令人欣喜的是，目前有许多课程公开了学习资料！\n以下是一份不完全的课程列表——其中偏向于斯坦福大学的课程，如果您知道其他有用的资源，请联系我们！\n* [斯坦福 CS 324 大型语言模型](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2022\u002F)\n* [斯坦福 CS 324 基础模型进展](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2023\u002F)\n* [Sasha 关于“我们真的需要注意力机制吗？”的演讲](https:\u002F\u002Fgithub.com\u002Fsrush\u002Fdo-we-need-attention\u002Fblob\u002Fmain\u002FDoWeNeedAttention.pdf)\n* [斯坦福 CS 229S 机器学习系统](https:\u002F\u002Fcs229s.stanford.edu\u002Ffall2023\u002F)\n* [MLSys 研讨会](https:\u002F\u002Fmlsys.stanford.edu\u002F)\n* [伯克利 AI-Sys](https:\u002F\u002Fucbrise.github.io\u002Fcs294-ai-sys-sp22\u002F)\n* [MIT CS 6.5940](https:\u002F\u002Fhanlab.mit.edu\u002Fcourses\u002F2023-fall-65940)\n\n如果您只想了解演讲中的主要内容，可以参考这些博客文章：\n* [使用基础模型进行数据清洗](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-13-datawrangling)\n* [FlashAttention](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-12-flashattention-long-sequences) 和 [FlashAttention-2](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-07-17-flash2)\n* [简化 S4](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-11-simplifying-s4)\n* [用于 GPT 类模型的长卷积](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-conv-tutorial)\n* [Zoology 合成数据分析](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [基于 Zoology 的模型](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology2-based)\n* [真正的次二次复杂度模型](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-truly-subquadratic)\n\n关于 [以数据为中心的人工智能](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fdata-centric-ai) 的较早资源集合。\n\n本 README 的其余部分按主题划分了各类资源。\n\n**目录：**\n* [面向系统的基础模型](#foundation-models-for-systems)\n* [硬件感知算法](#hardware-aware-algorithms)\n* [我们可以取代注意力机制吗？](#can-we-replace-attention)\n* [用于语言建模的合成数据](#synthetics-for-language-modeling)\n* [真正的次二次复杂度模型](#truly-sub-quadratic-models)\n* [量化、剪枝与蒸馏](#quantization-pruning-and-distillation)\n* [推理系统](#systems-for-inference)\n* [高吞吐量](#high-throughput)\n* [新型数据类型](#new-data-types)\n\n## 面向系统的基础模型\n基础模型正在改变我们构建传统问题系统的方式，例如数据清洗。\n关于此主题的 [SIGMOD 主题演讲](https:\u002F\u002Fcs.stanford.edu\u002F~chrismre\u002Fpapers\u002FSIGMOD-Chris-Re-DataCentric-Foundation-Models-KeyNote.pdf)。\nIhab Ilyas 和 Xu Chen 的数据清洗教材：[Data Cleaning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fbook\u002F10.1145\u002F3310205)。\n[面向系统的机器学习](https:\u002F\u002Fmlforsystems.org\u002F) 研讨会和社区也非常出色。\n\n### 
博客文章\n* [糟糕的数据每年让美国损失 3 万亿美元](https:\u002F\u002Fhbr.org\u002F2016\u002F09\u002Fbad-data-costs-the-u-s-3-trillion-per-year)\n* [使用基础模型进行数据清洗](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-13-datawrangling)\n* [有问必答：利用基础模型构建私密且个性化的系统](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-04-18-personalization)\n\n### 论文\n* [Holoclean：基于概率推理的整体数据修复](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.00820)\n* [基础模型能帮你清洗数据吗？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.09911)\n* [基础模型能帮助我们实现完美保密吗？](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.13722) 以及 [ConcurrentQA](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00580\u002F117168\u002FReasoning-over-Public-and-Private-Data-in)\n* [语言模型支持简单系统，用于生成异构数据湖的结构化视图](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09433)\n* [Symphony：迈向多模态数据湖上的自然语言查询回答](https:\u002F\u002Fwww.cidrdb.org\u002Fcidr2023\u002Fpapers\u002Fp51-chen.pdf)\n* [CodexDB：利用 GPT-3 Codex 根据自然语言指令合成查询处理代码](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol15\u002Fp2921-trummer.pdf)\n* [CHORUS：用于统一数据发现与探索的基础模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09610)\n* [大型语言模型将如何颠覆数据管理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3611479.3611527)\n* [GPTuner：通过 GPT 引导的贝叶斯优化实现手动阅读数据库调优的系统](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.03157)\n* [Jellyfish：用于数据预处理的大型语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01678)\n* [大型语言模型能否仅凭列名预测数据相关性？](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.14778\u002F3625054.3625066)\n\n## 硬件感知算法\n\n针对当今机器学习原语的硬件感知算法。\n经典参考资料：\n* 数据库领域关于 I\u002FO 复杂性的经典研究：[排序及相关问题的输入输出复杂性](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F48529.48535)。\n* 计算机体系结构的经典书籍：[计算机体系结构：定量方法](https:\u002F\u002Fia800203.us.archive.org\u002F31\u002Fitems\u002F2007ComputerArchitectureAQuantitativeApproach\u002F2007%20-%20Computer%20Architecture%20-%20A%20Quantitative%20Approach.pdf)。\n* 关于 FFT 的经典教材：[快速傅里叶变换的计算框架](https:\u002F\u002Fepubs.siam.org\u002Fdoi\u002Fbook\u002F10.1137\u002F1.9781611970999)。\n\n[Jim Gray 的图灵奖简介](https:\u002F\u002Famturing.acm.org\u002Faward_winners\u002Fgray_3649936.cfm)。\n\n### 博客文章\n* [Horace He 从基本原理出发让深度学习“轰鸣”起来](https:\u002F\u002Fhorace.io\u002Fbrrr_intro.html)\n* [Aleksa Gordic 对 FlashAttention 的通俗解释](https:\u002F\u002Fgordicaleksa.medium.com\u002Feli5-flash-attention-5c44017022ad)\n* [FlashAttention](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F01\u002F13\u002Fflashattention.html)\n* [FlashFFTConv](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-11-13-flashfftconv)\n* [Sasha 的 GPU 谜题](https:\u002F\u002Fgithub.com\u002Fsrush\u002FGPU-Puzzles)\n\n### 论文\n* [FlashAttention](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.14135) 和 [FlashAttention-2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08691)\n* [自注意力并不需要 O(N^2) 的内存](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.05682)\n* [FlashFFTConv：利用 Tensor Cores 实现长序列的高效卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05908)\n* [tcFFT：通过 Tensor Cores 加速半精度 FFT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.11471)\n* [库利–图基 FFT 算法](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCooley%E2%80%93Tukey_FFT_algorithm)\n* [使用 PagedAttention 进行大型语言模型推理时的高效内存管理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06180)\n* [基于块的 Transformer 的环形注意力，实现近乎无限的上下文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.01889)\n* [通过稀疏 Flash Attention 提高大序列上的因果注意力效率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01160)\n* 
[FLAT：一种优化的数据流，用于缓解注意力瓶颈](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06419)\n* [Megatron-LM：利用模型并行训练数十亿参数的语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.08053)\n* [HOGWILD!：一种无锁的随机梯度下降并行化方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F1106.5730)\n* [高效扩展 Transformer 推理规模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.05102)\n* [微软 DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed)\n* [Eleuther 的 GPT-NeoX 仓库](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fgpt-neox)\n* [卷积神经网络分块的系统性方法](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.04209)\n* [TVM：面向深度学习的自动化端到端优化编译器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.04799)\n* [MegaBlocks：混合专家架构下的高效稀疏训练](https:\u002F\u002Fpeople.eecs.berkeley.edu\u002F~matei\u002Fpapers\u002F2023\u002Fmlsys_megablocks.pdf)\n* [用于长文档理解的分块自注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02972)\n\n## 我们能取代注意力机制吗？\n\n在序列长度上呈次二次增长的注意力替代方案。\n信号处理的经典教材：[离散时间信号处理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.5555\u002F1795494)。\n该领域的高层次综述：[从深度学习到长序列学习](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-03-27-long-learning)。\n\n### 博客文章\n* [什么是长卷积？](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-conv-tutorial)\n* [更长的序列能否帮助人工智能实现下一次飞跃？](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-09-longer-sequences-next-leap-ai)\n* [简化 S4](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-11-simplifying-s4)\n* [Sasha 的 S4 详细注释版](https:\u002F\u002Fsrush.github.io\u002Fannotated-s4\u002F)\n* [H3：使用状态空间模型进行语言建模，几乎无需注意力](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-20-h3)\n* [Hyena 博客](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-06-08-hyena-safari)\n* Mamba 的推特系列帖子，由 [Albert](https:\u002F\u002Ftwitter.com\u002F_albertgu\u002Fstatus\u002F1731727672286294400) 和 [Tri](https:\u002F\u002Ftwitter.com\u002Ftri_dao\u002Fstatus\u002F1731728602230890895) 发布\n* [StripedHyena-7B](https:\u002F\u002Fwww.together.ai\u002Fblog\u002Fstripedhyena-7b)\n* [Zoology](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology0-intro)\n* [Zoology 分析](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [Based 架构](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology2-based)\n\n### 论文\n* [Long Range Arena](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.04006)\n* [Mamba：具有选择性状态空间的线性时间序列建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.00752) 和 [代码](https:\u002F\u002Fgithub.com\u002Fstate-spaces\u002Fmamba)\n* [Zoology：衡量和改进高效语言模型中的召回率](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04927)\n* [RWKV](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13048) 和 [代码](https:\u002F\u002Fgithub.com\u002FBlinkDL\u002FRWKV-LM)\n* [利用结构化状态空间高效建模长序列](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00396)\n* [通过门控状态空间实现长距离语言建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.13947)\n* [Hungry Hungry Hippos：迈向使用状态空间模型的语言建模](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14052)\n* [Hyena 层次结构：迈向更大的卷积语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.10866)\n* [用于序列建模的简化状态空间层](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.04933)\n* [关于对角线状状态空间模型的参数化和初始化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11893)\n* [Mega：配备移动平均的门控注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.10655)\n* [用于序列建模的简单且硬件高效的长卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.06646)\n* [对角线状状态空间与结构化状态空间同样有效](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14343)\n* [Retentive Network：大型语言模型的 Transformer 
替代品](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08621)\n* [为长序列复活循环神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06349)\n* [MultiResFormer：具有自适应多分辨率建模能力的通用时间序列预测 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18780)\n* [CKConv：用于序列数据的连续核卷积](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.02611)\n* [无需注意力的预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10544)\n* [无需注意力的扩散模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.18257)\n* [液态结构状状态空间模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12951)\n* [用于参数化偏微分方程的傅里叶神经算子](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08895)\n\n### 注意力近似\n关于注意力近似的文献也非常丰富（稀疏、低秩等）。同样令人兴奋！以下是该领域的一些优秀想法：\n* [Transformer 就是 RNN：具有线性注意力的快速自回归 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.16236)\n* [Reformer：高效的 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.04451)\n* [用 Performers 重新思考注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.14794)\n* [Nyströmformer：基于 Nyström 方法的自注意力近似算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.03902)\n* [Linformer：具有线性复杂度的自注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.04768)\n* [Skyformer：利用高斯核和 Nyström 方法重塑自注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00035)\n* [Scatterbrain：统一稀疏和低秩注意力近似](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.15343)\n* [Big Bird：适用于更长序列的 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.14062)\n* [Luna：线性的统一嵌套注意力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.01540)\n* [FNet：用傅里叶变换混合 token](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03824)\n* [线性 Transformer 中的魔鬼](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.10340)\n* [cosFormer：重新思考注意力中的 Softmax](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.08791)\n\n## 用于语言建模的合成任务\n在高效语言模型的研究中，合成任务（例如联想回忆）对于在大规模昂贵的预训练之前理解和调试问题至关重要。\n\n### 代码\n我们创建了一个非常简单的 GitHub 仓库，其中包含一个用于理解和测试语言模型架构在合成任务上表现的简单实验平台：**[HazyResearch\u002Fzoology]( https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fzoology)**。\n\n### 博客文章\n* [Zoology 关于合成数据的博客文章](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-zoology1-analysis)\n* [H3 博客文章](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-20-h3) 中关于联想记忆的部分\n* [Anthropic 对 *诱导头* 中联想记忆机制的精彩解释](https:\u002F\u002Ftransformer-circuits.pub\u002F2022\u002Fin-context-learning-and-induction-heads\u002Findex.html#definition-of-induction-heads)\n\n### 论文\n* [Zoology 第 3–4 节](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.04927)\n* [H3 第 3.1 节](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.14052)\n* [上下文学习与诱导头](https:\u002F\u002Ftransformer-circuits.pub\u002F2022\u002Fin-context-learning-and-induction-heads\u002Findex.html)\n* [联想长短期记忆](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.03032)\n* [利用快速权重关注近期历史](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.06258)\n* [在循环神经网络中学习更新自联想记忆以提升序列记忆能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.06493)\n* [自注意力联想记忆](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03519)\n* [神经图灵机](https:\u002F\u002Farxiv.org\u002Fabs\u002F1410.5401)\n* [勒让德记忆单元：循环神经网络中的连续时间表示](https:\u002F\u002Fpapers.nips.cc\u002Fpaper_files\u002Fpaper\u002F2019\u002Fhash\u002F952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)\n* 合成任务的历史可以追溯到 LSTM：[长短期记忆](https:\u002F\u002Fdeeplearning.cs.cmu.edu\u002FF23\u002Fdocument\u002Freadings\u002FLSTM.pdf)\n\n## 真正的次二次模型\n\n机器学习模型在另一个维度上是二次的——即模型宽度。我们能否开发出随着模型宽度增长呈次二次增长的模型呢？\n\n关于这些内容的经典教材是：[结构化矩阵与多项式](https:\u002F\u002Flink.springer.com\u002Fbook\u002F10.1007\u002F978-1-4612-0129-8)。\n\n### 博客文章\n* 
[迈向真正的次二次模型](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-12-11-truly-subquadratic)\n* [M2-BERT：重访 BERT，无需注意力机制或 MLP](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-07-25-m2-bert)\n* [像素化蝴蝶：神经网络模型的简单高效稀疏训练](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-01-17-Sparsity-3-Pixelated-Butterfly)\n* [蝴蝶就是全部所需：结构化线性映射的通用构建模块](https:\u002F\u002Fdawn.cs.stanford.edu\u002F2019\u002F06\u002F13\u002Fbutterfly\u002F)\n\n### 论文\n* [君主混合器](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12109)\n* [君主](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00595)\n* [像素化蝴蝶：神经网络模型的简单高效稀疏训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.00029)\n* [利用蝴蝶分解学习线性变换的快速算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.05895)\n* [万花筒：所有结构化线性映射的高效可学习表示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.14966)\n* [球面谐波展开的快速算法](http:\u002F\u002Ftygert.com\u002Fbutterfly.pdf)\n* [蝴蝶分解](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.01379)\n* [GLaM：基于专家混合的高效语言模型扩展](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.06905)\n* [结构化稠密矩阵乘法的双管齐下进展](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01569)\n\n## 量化、剪枝与蒸馏\n量化、剪枝和蒸馏是提升效率的优秀技术。以下仅是对其中一些想法的简要概述：\n* [QLoRA：量化大语言模型的高效微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14314)\n* [彩票假说：寻找稀疏且可训练的神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635)\n* [深度压缩：通过剪枝、训练后量化和霍夫曼编码压缩深度神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1510.00149)\n* [通过逐层最优脑外科手术学习剪枝深度神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07565)\n* [QuIP#：带有格子码书的 QuIP](https:\u002F\u002Fcornell-relaxml.github.io\u002Fquip-sharp\u002F)\n* [SmoothQuant：大型语言模型的准确高效训练后量化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10438)\n* [SpAtten：具有级联标记和头部剪枝的高效稀疏注意力架构](https:\u002F\u002Fhanlab.mit.edu\u002Fprojects\u002Fspatten)\n* [利用 NVIDIA Ampere 架构和 NVIDIA TensorRT 加速稀疏推理](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Faccelerating-inference-with-sparsity-using-ampere-and-tensorrt\u002F)\n* [SpQR：用于近无损 LLM 权重压缩的稀疏量化表示](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03078)\n* [LoRA：大型语言模型的低秩适应](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.09685)\n* [MCUNet：物联网设备上的微型深度学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.10319)\n* [MONGOOSE：用于高效神经网络训练的可学习 LSH 框架](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fmongoose)\n\n## 推理系统\n对于大语言模型而言，推理成本日益重要：模型被服务的次数远远超过其训练次数。因此，推理系统成为一个越来越重要的问题。以下是一些相关论文和帖子，这一领域仍有大量工作要做！\n* [快速 Transformer 解码：只需一个写头即可](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.02150)\n* [GQA：从多头检查点训练广义多查询 Transformer 模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13245)\n* [针对长上下文推理的闪速解码](https:\u002F\u002Fcrfm.stanford.edu\u002F2023\u002F10\u002F12\u002Fflashdecoding.html)\n* [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)\n* [通过推测性解码实现 Transformer 的快速推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.17192)\n* [MatFormer：用于弹性推理的嵌套 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.07707)\n* [具有注意力汇流的高效流式语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.17453)\n* [Hugging Face TGI](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftext-generation-inference\u002Findex)\n* [NVIDIA TensorRT](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT)\n* [Together 推理引擎](https:\u002F\u002Fwww.together.ai\u002Fblog\u002Ftogether-inference-engine-v1)\n* [笑猎豹酿酒厂：从卷积中提取紧凑递归](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.18780)\n* [H2O：用于大型语言模型高效生成式推理的重击者 Oracle](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14048)\n\n## 高吞吐\n基础模型将越来越多地用于支持后台任务，例如文档处理（而不仅仅是聊天界面）。这些任务需要与我们当前的推理解决方案不同的系统。这项工作目前还处于非常早期的阶段，但有望很快取得更多进展！\n* 
[批处理计算与即将到来的AI系统时代](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-04-12-batch)。\n* [FlexGen：单张GPU实现大语言模型的高吞吐生成式推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06865)\n* [Evaporate：语言模型助力构建简单系统，用于生成异构数据湖的结构化视图](https:\u002F\u002Fwww.vldb.org\u002Fpvldb\u002Fvol17\u002Fp92-arora.pdf)\n\n## 新的数据类型\n大多数机器学习模型专注于文本或图像，但还有许多其他模态，它们带来了独特的挑战（例如长上下文）。新的模态将推动模型架构和系统的进步。以下是一些已整理的模态：\n* DNA：[HyenaDNA论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15794)及[博客文章](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-06-29-hyena-dna)\n* [用于视频的SSM](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14526)\n* [SpaceTime：利用简单的离散状态空间有效建模时间序列](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09489) [[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09489)] [[代码](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Fspacetime\u002Ftree\u002Fmain)], [[演示](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1dyR7ZGnjNfS2GMjRUfDzujQLhxSo-Xsk?usp=sharing)]\n* [用于图表示学习的循环距离编码神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.01538)\n* [利用图神经网络和结构化状态空间模型建模多变量生物信号](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11176)\n* [自监督图神经网络用于改进脑电图癫痫分析](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08336)\n* [基于广泛神经影像数据的脑动力学自监督学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.11417)\n* [scHyena：用于全长度单细胞RNA测序分析的基础模型（应用于大脑）](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.02713)","# aisys-building-blocks 快速上手指南\n\n## 环境准备\n- **系统要求**：任意支持 Git 的系统（Linux\u002FmacOS\u002FWindows）\n- **前置依赖**：Git（确保已安装，可通过 `git --version` 验证）\n\n## 安装步骤\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Faisys-building-blocks.git\ncd aisys-building-blocks\n```\n\n> 若 GitHub 访问缓慢，可在配置网络代理后再执行上述克隆命令。\n\n## 基本使用\n1. 打开 `README.md` 文件（本地或浏览器访问 GitHub 仓库）。\n2. 浏览核心资源分类，快速获取关键内容：\n   - **课程资源**：如 [Stanford CS 324 LLMs](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2022\u002F)、[MLSys Seminar](https:\u002F\u002Fmlsys.stanford.edu\u002F)\n   - **关键博客**：如 [FlashAttention](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-12-flashattention-long-sequences)、[Simplifying S4](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2022-06-11-simplifying-s4)\n   - **论文索引**：按主题分类（如 `Hardware-Aware Algorithms`、`Can We Replace Attention?`）\n3. **快速入门示例**：  \n   要了解高效注意力机制，直接访问 [FlashAttention 博客](https:\u002F\u002Fhazyresearch.stanford.edu\u002Fblog\u002F2023-01-12-flashattention-long-sequences)。\n\n> 说明：此仓库为资源聚合库，无可执行代码。所有内容均通过链接指向公开学术资料。","某电商公司的AI工程师团队正开发智能客服系统，需处理每日百万级用户评论数据以提升意图识别准确率，但面临基础模型开发效率低下的挑战。\n\n### 没有 aisys-building-blocks 时\n- 数据清洗依赖人工规则，处理10万条评论需2周，且漏检率高达15%\n- 模型优化缺乏高效方案，尝试多种注意力机制导致训练时间翻倍\n- 资源分散在GitHub、博客和论文中，搜索整合耗时占开发周期40%\n- 技术选型盲目，选错架构使模型精度波动达12%\n- 代码重复开发，团队协作效率低下，平均迭代周期长达3周\n\n### 使用 aisys-building-blocks 后\n- 直接应用Data Wrangling with Foundation Models方法，数据清洗时间压缩至2天，漏检率降至3%\n- 采用FlashAttention-2优化，训练速度提升3倍，GPU资源消耗降低50%\n- 通过整合的资源列表快速定位关键论文（如Zoology分析），技术选型精准度提升至90%\n- 基于社区验证的S4简化方案，模型精度稳定提升8%，波动控制在5%内\n- 复用已验证的代码块（如Long Convolutions实现），开发周期缩短至1周\n\naisys-building-blocks将基础模型构建的碎片化知识转化为可直接复用的系统组件，让AI开发从“试错摸索”跃升为“高效迭代”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHazyResearch_aisys-building-blocks_6fcb9c04.png","HazyResearch","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FHazyResearch_5f558f19.png","We are a CS research group led by Prof. 
Chris Ré.",null,"contact.hazy@gmail.com","https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fchrismre\u002F","https:\u002F\u002Fgithub.com\u002FHazyResearch",613,28,"2026-04-03T12:10:25",1,"","未说明",{"notes":89,"python":87,"dependencies":90},"该仓库为资源聚合列表，无需安装运行环境，仅提供学习资料链接",[],[13,51],"2026-03-27T02:49:30.150509","2026-04-06T07:16:16.981720",[],[]]