[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Hannibal046--Awesome-LLM":3,"tool-Hannibal046--Awesome-LLM":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",142651,2,"2026-04-06T23:34:12",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":73,"owner_location":73,"owner_email":73,"owner_twitter":73,"owner_website":73,"owner_url":76,"languages":73,"stars":77,"forks":78,"last_commit_at":79,"license":80,"difficulty_score":81,"env_os":82,"env_gpu":83,"env_ram":83,"env_deps":84,"category_tags":87,"github_topics":73,"view_count":32,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":89,"updated_at":90,"faqs":91,"releases":122},4746,"Hannibal046\u002FAwesome-LLM","Awesome-LLM","Awesome-LLM: a curated list of Large Language Model","Awesome-LLM 是一份精心整理的大型语言模型（LLM）资源清单，旨在为快速演进的 AI 领域提供一站式导航。面对大模型技术爆发式增长带来的信息过载与资源分散问题，它系统性地汇集了从里程碑式学术论文、开源模型权重、训练框架、部署工具，到评估基准、实战教程及行业洞察等全方位内容。无论是追踪 DeepSeek-R1、Qwen2.5-Max 等前沿项目，还是回顾 Transformer、BERT 等奠基性研究，用户都能在此找到权威链接。\n\n这份资源特别适合 AI 研究人员、开发者及技术爱好者使用。研究人员可借此高效梳理技术演进脉络，获取最新论文；开发者能快速定位所需的训练框架、推理工具及开源代码，加速项目落地；学习者则能通过 curated 的课程与书籍入门大模型领域。其核心亮点在于“精选”与“全面”：不仅覆盖技术全链路，还特别关注如 ChatGPT 相关研究及公开可用的 API 与检查点，并持续更新热门项目动态。Awesome-LLM 如同大模型领域的专业地图，帮助用户在浩瀚的技术海洋中精准定位所需资源，降低学习与研发门槛，促进社区知识共享。","\n# Awesome-LLM [![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re)\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHannibal046_Awesome-LLM_readme_9a36c1759d7e.gif)\n\n🔥 Large Language Models(LLM) have taken the ~~NLP community~~ ~~AI community~~ **the Whole World** by storm. Here is a curated list of papers about large language models, especially relating to ChatGPT. It also contains frameworks for LLM training, tools to deploy LLM, courses and tutorials about LLM and all publicly available LLM checkpoints and APIs.\n\n## Trending LLM Projects\n\n- [TinyZero](https:\u002F\u002Fgithub.com\u002FJiayi-Pan\u002FTinyZero) - Clean, minimal, accessible reproduction of DeepSeek R1-Zero\n- [open-r1](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fopen-r1) - Fully open reproduction of DeepSeek-R1\n- [DeepSeek-R1](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1) - First-generation reasoning models from DeepSeek.\n- [Qwen2.5-Max](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-max\u002F) - Exploring the Intelligence of Large-scale MoE Model.\n- [OpenAI o3-mini](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-o3-mini\u002F) - Pushing the frontier of cost-effective reasoning.\n- [DeepSeek-V3](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-V3) - First open-sourced GPT-4o level model.\n- [Kimi-K2](https:\u002F\u002Fgithub.com\u002FMoonshotAI\u002FKimi-K2) - MoE language model with 32B active and 1T total parameters.\n\n\n## Table of Content\n- [Awesome-LLM ](#awesome-llm-)\n  - [Milestone Papers](#milestone-papers)\n  - [Other Papers](#other-papers)\n  - [LLM Leaderboard](#llm-leaderboard)\n  - [Open LLM](#open-llm)\n  - [LLM Data](#llm-data)\n  - [LLM Evaluation](#llm-evaluation)\n  - [LLM Training Framework](#llm-training-frameworks)\n  - [LLM Inference](#llm-inference)\n  - [LLM Applications](#llm-applications)\n  - [LLM Tutorials and Courses](#llm-tutorials-and-courses)\n  - [LLM Books](#llm-books)\n  - [Great thoughts about LLM](#great-thoughts-about-llm)\n  - [Miscellaneous](#miscellaneous)\n\n## Milestone Papers\n\n\u003Cdetails>\n\n\u003Csummary> milestone papers \u003C\u002Fsummary>\n  \n|   Date  |       keywords       |      Institute     |                                                                                                        Paper                                                                                                       |\n|:-------:|:--------------------:|:------------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 2017-06 |     Transformers     |       Google       | [Attention Is All You Need](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03762.pdf)                                                                                                                                                  |\n| 2018-06 |        GPT 1.0       |       OpenAI       | [Improving Language Understanding by Generative Pre-Training](https:\u002F\u002Fwww.cs.ubc.ca\u002F~amuham01\u002FLING530\u002Fpapers\u002Fradford2018improving.pdf)                                                                             |\n| 2018-10 |         BERT         |       Google       | [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https:\u002F\u002Faclanthology.org\u002FN19-1423.pdf)                                                                                          |\n| 2019-02 |        GPT 2.0       |       OpenAI       | [Language Models are Unsupervised Multitask Learners](https:\u002F\u002Fd4mucfpksywv.cloudfront.net\u002Fbetter-language-models\u002Flanguage_models_are_unsupervised_multitask_learners.pdf)                                          |\n| 2019-09 |      Megatron-LM     |       NVIDIA       | [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.08053.pdf)                                                                                      |\n| 2019-10 |          T5          |       Google       | [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv21\u002F20-074.html)                                                                                       |\n| 2019-10 |         ZeRO         |      Microsoft     | [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.02054.pdf)                                                                                                       |\n| 2020-01 |      Scaling Law     |       OpenAI       | [Scaling Laws for Neural Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.08361.pdf)                                                                                                                                    |\n| 2020-05 |        GPT 3.0       |       OpenAI       | [Language models are few-shot learners](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)                                                                                         |\n| 2021-01 |  Switch Transformers |       Google       | [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.03961.pdf)                                                                               |\n| 2021-08 |         Codex        |       OpenAI       | [Evaluating Large Language Models Trained on Code](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.03374.pdf)                                                                                                                           |\n| 2021-08 |   Foundation Models  |      Stanford      | [On the Opportunities and Risks of Foundation Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.07258.pdf)                                                                                                                        |\n| 2021-09 |         FLAN         |       Google       | [Finetuned Language Models are Zero-Shot Learners](https:\u002F\u002Fopenreview.net\u002Fforum?id=gEZrGCozdqR)                                                                                                                    |\n| 2021-10 |          T0          | HuggingFace et al. | [Multitask Prompted Training Enables Zero-Shot Task Generalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.08207)                                                                                                              |\n| 2021-12 |         GLaM         |       Google       | [GLaM: Efficient Scaling of Language Models with Mixture-of-Experts](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.06905.pdf)                                                                                                         |\n| 2021-12 |        WebGPT        |       OpenAI       | [WebGPT: Browser-assisted question-answering with human feedback](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FWebGPT%3A-Browser-assisted-question-answering-with-Nakano-Hilton\u002F2f3efe44083af91cef562c1a3451eee2f8601d22) |\n| 2021-12 |         Retro        |      DeepMind      | [Improving language models by retrieving from trillions of tokens](https:\u002F\u002Fwww.deepmind.com\u002Fpublications\u002Fimproving-language-models-by-retrieving-from-trillions-of-tokens)                                         |\n| 2021-12 |        Gopher        |      DeepMind      | [Scaling Language Models: Methods, Analysis & Insights from Training Gopher](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.11446.pdf)                                                                                                 |\n| 2022-01 |          COT         |       Google       | [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.11903.pdf)                                                                                                      |\n| 2022-01 |         LaMDA        |       Google       | [LaMDA: Language Models for Dialog Applications](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.08239.pdf)                                                                                                                             |\n| 2022-01 |        Minerva       |       Google       | [Solving Quantitative Reasoning Problems with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14858)                                                                                                                   |\n| 2022-01 |  Megatron-Turing NLG |  Microsoft&NVIDIA  | [Using Deep and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.11990.pdf)                                                                         |\n| 2022-03 |      InstructGPT     |       OpenAI       | [Training language models to follow instructions with human feedback](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02155.pdf)                                                                                                        |\n| 2022-04 |         PaLM         |       Google       | [PaLM: Scaling Language Modeling with Pathways](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.02311.pdf)                                                                                                                              |\n| 2022-04 |      Chinchilla      |      DeepMind      | [Training Compute-Optimal Large Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.15556)                             |\n| 2022-05 |          OPT         |        Meta        | [OPT: Open Pre-trained Transformer Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.01068.pdf)                                                                                                                          |\n| 2022-05 |          UL2         |       Google       | [Unifying Language Learning Paradigms](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.05131v1)                                                                                                                                         |\n| 2022-06 |  Emergent Abilities  |       Google       | [Emergent Abilities of Large Language Models](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yzkSU5zdwD)                                                                                                                            |\n| 2022-06 |       BIG-bench      |       Google       | [Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench)                                                                                |\n| 2022-06 |        METALM        |      Microsoft     | [Language Models are General-Purpose Interfaces](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.06336.pdf)                                                                                                                             |\n| 2022-09 |        Sparrow       |      DeepMind      | [Improving alignment of dialogue agents via targeted human judgements](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.14375.pdf)                                                                                                       |\n| 2022-10 |     Flan-T5\u002FPaLM     |       Google       | [Scaling Instruction-Finetuned Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.11416.pdf)                                                                                                                              |\n| 2022-10 |       GLM-130B       |      Tsinghua      | [GLM-130B: An Open Bilingual Pre-trained Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.02414.pdf)                                                                                                                              |\n| 2022-11 |         HELM         |      Stanford      | [Holistic Evaluation of Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09110.pdf)                                                                                                                                     |\n| 2022-11 |         BLOOM        |     BigScience     | [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.05100.pdf)                                                                                                            |\n| 2022-11 |       Galactica      |        Meta        | [Galactica: A Large Language Model for Science](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09085.pdf)                                                                                                                              |\n| 2022-12 |        OPT-IML       |        Meta        | [OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.12017)                                                                                   |\n| 2023-01 | Flan 2022 Collection |       Google       | [The Flan Collection: Designing Data and Methods for Effective Instruction Tuning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.13688.pdf)                                                                                           |\n| 2023-02 |         LLaMA        |        Meta        | [LLaMA: Open and Efficient Foundation Language Models](https:\u002F\u002Fresearch.facebook.com\u002Fpublications\u002Fllama-open-and-efficient-foundation-language-models\u002F)                                                            |\n| 2023-02 |       Kosmos-1       |      Microsoft     | [Language Is Not All You Need: Aligning Perception with Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14045)                                                                                                         |\n| 2023-03 |        LRU        |       DeepMind       | [Resurrecting Recurrent Neural Networks for Long Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06349)                                                                                                                                          |\n| 2023-03 |        PaLM-E        |       Google       | [PaLM-E: An Embodied Multimodal Language Model](https:\u002F\u002Fpalm-e.github.io)                                                                                                                                          |\n| 2023-03 |         GPT 4        |       OpenAI       | [GPT-4 Technical Report](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fgpt-4)                                                                                                                                                        |\n| 2023-04 |        LLaVA        | UW–Madison&Microsoft | [Visual Instruction Tuning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08485)                                                                                                |\n| 2023-04 |        Pythia        |  EleutherAI et al. | [Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01373)                                                                                                |\n| 2023-05 |       Dromedary      |     CMU et al.     | [Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03047)                                                                                 |\n| 2023-05 |        PaLM 2        |       Google       | [PaLM 2 Technical Report](https:\u002F\u002Fai.google\u002Fstatic\u002Fdocuments\u002Fpalm2techreport.pdf)                                                                                                                                  |\n| 2023-05 |         RWKV         |       Bo Peng      | [RWKV: Reinventing RNNs for the Transformer Era](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13048)                                                                                                                                 |\n| 2023-05 |          DPO         |      Stanford      | [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.18290.pdf)                                                                                             |\n| 2023-05 |          ToT         |  Google&Princeton  | [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10601.pdf)                                                                                                    |\n| 2023-07 |        LLaMA2       |        Meta        | [Llama 2: Open Foundation and Fine-Tuned Chat Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2307.09288.pdf)                                                                                                                        |\n| 2023-10 |      Mistral 7B      |       Mistral      | [Mistral 7B](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.06825.pdf)                                                                                                                                                                 |\n| 2023-12 |         Mamba        |    CMU&Princeton   | [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.00752)                                                                                                               |\n| 2024-01 |         DeepSeek-v2        |      DeepSeek     | [DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)                                                                                                                          |\n| 2024-02 |         OLMo        |      Ai2     | [OLMo: Accelerating the Science of Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00838) |\n| 2024-05 |         Mamba2        |      CMU&Princeton     | [Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.21060)|\n| 2024-05 |         Llama3        |      Meta     | [The Llama 3 Herd of Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.21783) |\n| 2024-06 |         FineWeb         |      HuggingFace     | [The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17557) |\n| 2024-09 |         OLMoE        |       Ai2     | [OLMoE: Open Mixture-of-Experts Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02060) |\n| 2024-12 |         Qwen2.5        |      Alibaba     | [Qwen2.5 Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15115) |\n| 2024-12 |         DeepSeek-V3        |      DeepSeek     | [DeepSeek-V3 Technical Report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.19437v1) |\n| 2025-01 |         DeepSeek-R1        |      DeepSeek     | [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12948) |\n\n\u003C\u002Fdetails>\n\n## Other Papers\n> [!NOTE]\n> If you're interested in the field of LLM, you may find the above list of milestone papers helpful to explore its history and state-of-the-art. However, each direction of LLM offers a unique set of insights and contributions, which are essential to understanding the field as a whole. For a detailed list of papers in various subfields, please refer to the following link:\n\n\u003Cdetails>\n  \u003Csummary> other papers \u003C\u002Fsummary>\n\n- [Awesome-LLM-hallucination](https:\u002F\u002Fgithub.com\u002FLuckyyySTA\u002FAwesome-LLM-hallucination) - LLM hallucination paper list.\n- [awesome-hallucination-detection](https:\u002F\u002Fgithub.com\u002FEdinburghNLP\u002Fawesome-hallucination-detection) - List of papers on hallucination detection in LLMs.\n- [LLMsPracticalGuide](https:\u002F\u002Fgithub.com\u002FMooler0410\u002FLLMsPracticalGuide) - A curated list of practical guide resources of LLMs\n- [Awesome ChatGPT Prompts](https:\u002F\u002Fgithub.com\u002Ff\u002Fawesome-chatgpt-prompts) - A collection of prompt examples to be used with the ChatGPT model.\n- [awesome-chatgpt-prompts-zh](https:\u002F\u002Fgithub.com\u002FPlexPt\u002Fawesome-chatgpt-prompts-zh) - A Chinese collection of prompt examples to be used with the ChatGPT model.\n- [Awesome ChatGPT](https:\u002F\u002Fgithub.com\u002Fhumanloop\u002Fawesome-chatgpt) - Curated list of resources for ChatGPT and GPT-3 from OpenAI.\n- [Chain-of-Thoughts Papers](https:\u002F\u002Fgithub.com\u002FTimothyxxx\u002FChain-of-ThoughtsPapers) -  A trend starts from \"Chain of Thought Prompting Elicits Reasoning in Large Language Models.\n- [Awesome Deliberative Prompting](https:\u002F\u002Fgithub.com\u002Flogikon-ai\u002Fawesome-deliberative-prompting) - How to ask LLMs to produce reliable reasoning and make reason-responsive decisions.\n- [Instruction-Tuning-Papers](https:\u002F\u002Fgithub.com\u002FSinclairCoder\u002FInstruction-Tuning-Papers) - A trend starts from `Natrural-Instruction` (ACL 2022), `FLAN` (ICLR 2022) and `T0` (ICLR 2022).\n- [LLM Reading List](https:\u002F\u002Fgithub.com\u002Fcrazyofapple\u002FReading_groups\u002F) - A paper & resource list of large language models.\n- [Reasoning using Language Models](https:\u002F\u002Fgithub.com\u002Fatfortes\u002FLM-Reasoning-Papers) - Collection of papers and resources on Reasoning using Language Models.\n- [Chain-of-Thought Hub](https:\u002F\u002Fgithub.com\u002FFranxYao\u002Fchain-of-thought-hub) - Measuring LLMs' Reasoning Performance\n- [Awesome GPT](https:\u002F\u002Fgithub.com\u002Fformulahendry\u002Fawesome-gpt) - A curated list of awesome projects and resources related to GPT, ChatGPT, OpenAI, LLM, and more.\n- [Awesome GPT-3](https:\u002F\u002Fgithub.com\u002Felyase\u002Fawesome-gpt3) - a collection of demos and articles about the [OpenAI GPT-3 API](https:\u002F\u002Fopenai.com\u002Fblog\u002Fopenai-api\u002F).\n- [Awesome LLM Human Preference Datasets](https:\u002F\u002Fgithub.com\u002FPolisAI\u002Fawesome-llm-human-preference-datasets) - a collection of human preference datasets for LLM instruction tuning, RLHF and evaluation.\n- [RWKV-howto](https:\u002F\u002Fgithub.com\u002FHannibal046\u002FRWKV-howto) - possibly useful materials and tutorial for learning RWKV.\n- [ModelEditingPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FModelEditingPapers) - A paper & resource list on model editing for large language models.\n- [Awesome LLM Security](https:\u002F\u002Fgithub.com\u002Fcorca-ai\u002Fawesome-llm-security) - A curation of awesome tools, documents and projects about LLM Security.\n- [Awesome-Align-LLM-Human](https:\u002F\u002Fgithub.com\u002FGaryYufei\u002FAlignLLMHumanSurvey) - A collection of papers and resources about aligning large language models (LLMs) with human.\n- [Awesome-Code-LLM](https:\u002F\u002Fgithub.com\u002Fhuybery\u002FAwesome-Code-LLM) - An awesome and curated list of best code-LLM for research.\n- [Awesome-LLM-Compression](https:\u002F\u002Fgithub.com\u002FHuangOwen\u002FAwesome-LLM-Compression) - Awesome LLM compression research papers and tools.\n- [Awesome-LLM-Systems](https:\u002F\u002Fgithub.com\u002FAmberLJC\u002FLLMSys-PaperList) - Awesome LLM systems research papers.\n- [awesome-llm-webapps](https:\u002F\u002Fgithub.com\u002Fsnowfort-ai\u002Fawesome-llm-webapps) - A collection of open source, actively maintained web apps for LLM applications.\n- [awesome-japanese-llm](https:\u002F\u002Fgithub.com\u002Fllm-jp\u002Fawesome-japanese-llm) - 日本語LLMまとめ - Overview of Japanese LLMs.\n- [Awesome-LLM-Healthcare](https:\u002F\u002Fgithub.com\u002Fmingze-yuan\u002FAwesome-LLM-Healthcare) - The paper list of the review on LLMs in medicine.\n- [Awesome-LLM-Inference](https:\u002F\u002Fgithub.com\u002FDefTruth\u002FAwesome-LLM-Inference) - A curated list of Awesome LLM Inference Paper with codes.\n- [Awesome-LLM-3D](https:\u002F\u002Fgithub.com\u002FActiveVisionLab\u002FAwesome-LLM-3D) - A curated list of Multi-modal Large Language Model in 3D world, including 3D understanding, reasoning, generation, and embodied agents.\n- [LLMDatahub](https:\u002F\u002Fgithub.com\u002FZjh-819\u002FLLMDataHub) - a curated collection of datasets specifically designed for chatbot training, including links, size, language, usage, and a brief description of each dataset\n- [Awesome-Chinese-LLM](https:\u002F\u002Fgithub.com\u002FHqWu-HITCS\u002FAwesome-Chinese-LLM) - 整理开源的中文大语言模型，以规模较小、可私有化部署、训练成本较低的模型为主，包括底座模型，垂直领域微调及应用，数据集与教程等。\n\n- [LLM4Opt](https:\u002F\u002Fgithub.com\u002FFeiLiu36\u002FLLM4Opt) - Applying Large language models (LLMs) for diverse optimization tasks (Opt) is an emerging research area. This is a collection of references and papers of LLM4Opt.\n\n- [awesome-language-model-analysis](https:\u002F\u002Fgithub.com\u002FFuryton\u002Fawesome-language-model-analysis) - This paper list focuses on the theoretical or empirical analysis of language models, e.g., the learning dynamics, expressive capacity, interpretability, generalization, and other interesting topics.\n  \n\u003C\u002Fdetails>\n\n## LLM Leaderboard\n- [Chatbot Arena Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmsys\u002Fchatbot-arena-leaderboard) - a benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.\n- [LiveBench](https:\u002F\u002Flivebench.ai\u002F#\u002F) - A Challenging, Contamination-Free LLM Benchmark.\n- [Open LLM Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fopen-llm-leaderboard\u002Fopen_llm_leaderboard) - aims to track, rank, and evaluate LLMs and chatbots as they are released.\n- [AlpacaEval](https:\u002F\u002Ftatsu-lab.github.io\u002Falpaca_eval\u002F) - An Automatic Evaluator for Instruction-following Language Models using Nous benchmark suite.\n\u003Cdetails>\n  \u003Csummary> other leaderboards \u003C\u002Fsummary>\n\n- [ACLUE](https:\u002F\u002Fgithub.com\u002Fisen-zhang\u002FACLUE) - an evaluation benchmark focused on ancient Chinese language comprehension. \n- [BeHonest](https:\u002F\u002Fgair-nlp.github.io\u002FBeHonest\u002F#leaderboard) - A pioneering benchmark specifically designed to assess honesty in LLMs comprehensively. \n- [Berkeley Function-Calling Leaderboard](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html) - evaluates LLM's ability to call external functions\u002Ftools.\n- [Chinese Large Model Leaderboard](https:\u002F\u002Fgithub.com\u002Fjeinlee1991\u002Fchinese-llm-benchmark) - an expert-driven benchmark for Chineses LLMs.\n- [CompassRank](https:\u002F\u002Frank.opencompass.org.cn) - CompassRank is dedicated to exploring the most advanced language and visual models, offering a comprehensive, objective, and neutral evaluation reference for the industry and research.\n- [CompMix](https:\u002F\u002Fqa.mpi-inf.mpg.de\u002Fcompmix) - a benchmark evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes).\n- [DreamBench++](https:\u002F\u002Fdreambenchplus.github.io\u002F#leaderboard) - a benchmark for evaluating the performance of large language models (LLMs) in various tasks related to both textual and visual imagination.\n- [FELM](https:\u002F\u002Fhkust-nlp.github.io\u002Ffelm) - a meta-benchmark that evaluates how well factuality evaluators assess the outputs of large language models (LLMs). \n- [InfiBench](https:\u002F\u002Finfi-coder.github.io\u002Finfibench) - a benchmark designed to evaluate large language models (LLMs) specifically in their ability to answer real-world coding-related questions.\n- [LawBench](https:\u002F\u002Flawbench.opencompass.org.cn\u002Fleaderboard) - a benchmark designed to evaluate large language models in the legal domain.\n- [LLMEval](http:\u002F\u002Fllmeval.com) - focuses on understanding how these models perform in various scenarios and analyzing results from an interpretability perspective. \n- [M3CoT](https:\u002F\u002Flightchen233.github.io\u002Fm3cot.github.io\u002Fleaderboard.html) - a benchmark that evaluates large language models on a variety of multimodal reasoning tasks, including language, natural and social sciences, physical and social commonsense, temporal reasoning, algebra, and geometry.\n- [MathEval](https:\u002F\u002Fmatheval.ai) - a comprehensive benchmarking platform designed to evaluate large models' mathematical abilities across 20 fields and nearly 30,000 math problems.\n- [MixEval](https:\u002F\u002Fmixeval.github.io\u002F#leaderboard) - a ground-truth-based dynamic benchmark derived from off-the-shelf benchmark mixtures, which evaluates LLMs with a highly capable model ranking (i.e., 0.96 correlation with Chatbot Arena) while running locally and quickly (6% the time and cost of running MMLU).\n- [MMedBench](https:\u002F\u002Fhenrychur.github.io\u002FMultilingualMedQA) - a benchmark that evaluates large language models' ability to answer medical questions across multiple languages. \n- [MMToM-QA](https:\u002F\u002Fchuanyangjin.com\u002Fmmtom-qa-leaderboard) - a multimodal question-answering benchmark designed to evaluate AI models' cognitive ability to understand human beliefs and goals.\n- [OlympicArena](https:\u002F\u002Fgair-nlp.github.io\u002FOlympicArena\u002F#leaderboard) - a benchmark for evaluating AI models across multiple academic disciplines like math, physics, chemistry, biology, and more.\n- [PubMedQA](https:\u002F\u002Fpubmedqa.github.io) - a biomedical question-answering benchmark designed for answering research-related questions using PubMed abstracts.\n- [SciBench](https:\u002F\u002Fscibench-ucla.github.io\u002F#leaderboard) -  benchmark designed to evaluate large language models (LLMs) on solving complex, college-level scientific problems from domains like chemistry, physics, and mathematics.\n- [SuperBench](https:\u002F\u002Ffm.ai.tsinghua.edu.cn\u002Fsuperbench\u002F#\u002Fleaderboard) - a benchmark platform designed for evaluating large language models (LLMs) on a range of tasks, particularly focusing on their performance in different aspects such as natural language understanding, reasoning, and generalization. \n- [SuperLim](https:\u002F\u002Flab.kb.se\u002Fleaderboard\u002Fresults) - a Swedish language understanding benchmark that evaluates natural language processing (NLP) models on various tasks such as argumentation analysis, semantic similarity, and textual entailment.\n- [TAT-DQA](https:\u002F\u002Fnextplusplus.github.io\u002FTAT-DQA) - a large-scale Document Visual Question Answering (VQA) dataset designed for complex document understanding, particularly in financial reports.\n- [TAT-QA](https:\u002F\u002Fnextplusplus.github.io\u002FTAT-QA) - a large-scale question-answering benchmark focused on real-world financial data, integrating both tabular and textual information.\n- [VisualWebArena](https:\u002F\u002Fjykoh.com\u002Fvwa) - a benchmark designed to assess the performance of multimodal web agents on realistic visually grounded tasks.\n- [We-Math](https:\u002F\u002Fwe-math.github.io\u002F#leaderboard) - a benchmark that evaluates large multimodal models (LMMs) on their ability to perform human-like mathematical reasoning.\n- [WHOOPS!](https:\u002F\u002Fwhoops-benchmark.github.io) - a benchmark dataset testing AI's ability to reason about visual commonsense through images that defy normal expectations.\n\n\u003C\u002Fdetails>\n\n\n## Open LLM\n\u003Cdetails>\n\u003Csummary>DeepSeek\u003C\u002Fsummary>\n  \n  - [DeepSeek-Math-7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-math-65f2962739da11599e441681)\n  - [DeepSeek-Coder-1.3|6.7|7|33B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-coder-65f295d7d8a0a29fe39b4ec4)\n  - [DeepSeek-VL-1.3|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-vl-65f295948133d9cf92b706d3)\n  - [DeepSeek-MoE-16B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-moe-65f29679f5cf26fe063686bf)\n  - [DeepSeek-v2-236B-MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)\n  - [DeepSeek-Coder-v2-16|236B-MOE](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-Coder-V2)\n  - [DeepSeek-V2.5](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-V2.5)\n  - [DeepSeek-V3](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-V3)\n  - [DeepSeek-R1](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Alibaba\u003C\u002Fsummary>\n\n  - [Qwen-1.8B|7B|14B|72B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FQwen\u002Fqwen-65c0e50c3f1ab89cb8704144)\n  - [Qwen1.5-0.5B|1.8B|4B|7B|14B|32B|72B|110B|MoE-A2.7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen1.5\u002F)\n  - [Qwen2-0.5B|1.5B|7B|57B-A14B-MoE|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2)\n  - [Qwen2.5-0.5B|1.5B|3B|7B|14B|32B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5\u002F)\n  - [CodeQwen1.5-7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fcodeqwen1.5\u002F)\n  - [Qwen2.5-Coder-1.5B|7B|32B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-coder\u002F)\n  - [Qwen2-Math-1.5B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-math\u002F)\n  - [Qwen2.5-Math-1.5B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-math\u002F)\n  - [Qwen-VL-7B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen-VL)\n  - [Qwen2-VL-2B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-vl\u002F)\n  - [Qwen2-Audio-7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-audio\u002F)\n  - [Qwen2.5-VL-3|7|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-vl\u002F)\n  - [Qwen2.5-1M-7|14B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-1m\u002F)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Meta\u003C\u002Fsummary>\n\n  - [Llama 3.2-1|3|11|90B](https:\u002F\u002Fllama.meta.com\u002F)\n  - [Llama 3.1-8|70|405B](https:\u002F\u002Fllama.meta.com\u002F)\n  - [Llama 3-8|70B](https:\u002F\u002Fllama.meta.com\u002Fllama3\u002F)\n  - [Llama 2-7|13|70B](https:\u002F\u002Fllama.meta.com\u002Fllama2\u002F)\n  - [Llama 1-7|13|33|65B](https:\u002F\u002Fai.facebook.com\u002Fblog\u002Flarge-language-model-llama-meta-ai\u002F)\n  - [OPT-1.3|6.7|13|30|66B](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.01068)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Mistral AI\u003C\u002Fsummary>\n\n  - [Codestral-7|22B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fcodestral\u002F)\n  - [Mistral-7B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fannouncing-mistral-7b\u002F)\n  - [Mixtral-8x7B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fmixtral-of-experts\u002F)\n  - [Mixtral-8x22B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fmixtral-8x22b\u002F)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Google\u003C\u002Fsummary>\n\n  - [Gemma2-9|27B](https:\u002F\u002Fblog.google\u002Ftechnology\u002Fdevelopers\u002Fgoogle-gemma-2\u002F)\n  - [Gemma-2|7B](https:\u002F\u002Fblog.google\u002Ftechnology\u002Fdevelopers\u002Fgemma-open-models\u002F)\n  - [RecurrentGemma-2B](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frecurrentgemma)\n  - [T5](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.10683)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Apple\u003C\u002Fsummary>\n\n  - [OpenELM-1.1|3B](https:\u002F\u002Fhuggingface.co\u002Fapple\u002FOpenELM)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Microsoft\u003C\u002Fsummary>\n\n  - [Phi1-1.3B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fphi-1)\n  - [Phi2-2.7B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fphi-2)\n  - [Phi3-3.8|7|14B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FPhi-3-mini-4k-instruct)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>AllenAI\u003C\u002Fsummary>\n\n  - [OLMo-7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fallenai\u002Folmo-suite-65aeaae8fe5b6b2122b46778)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>xAI\u003C\u002Fsummary>\n\n  - [Grok-1-314B-MoE](https:\u002F\u002Fx.ai\u002Fblog\u002Fgrok-os)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Cohere\u003C\u002Fsummary>\n\n  - [Command R-35B](https:\u002F\u002Fhuggingface.co\u002FCohereForAI\u002Fc4ai-command-r-v01)\n\n\u003C\u002Fdetails>\n\n\n\n\n\u003Cdetails>\n\u003Csummary>01-ai\u003C\u002Fsummary>\n\n  - [Yi-34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-2023-11-663f3f19119ff712e176720f)\n  - [Yi1.5-6|9|34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-15-2024-05-663f3ecab5f815a3eaca7ca8)\n  - [Yi-VL-6B|34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-vl-663f557228538eae745769f3)\n\n\u003C\u002Fdetails>\n \n \n\u003Cdetails>\n\u003Csummary>Baichuan\u003C\u002Fsummary>\n\n   - [Baichuan-7|13B](https:\u002F\u002Fhuggingface.co\u002Fbaichuan-inc)\n   - [Baichuan2-7|13B](https:\u002F\u002Fhuggingface.co\u002Fbaichuan-inc)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Nvidia\u003C\u002Fsummary>\n\n   - [Nemotron-4-340B](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FNemotron-4-340B-Instruct)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>BLOOM\u003C\u002Fsummary>\n\n   - [BLOOMZ&mT0](https:\u002F\u002Fhuggingface.co\u002Fbigscience\u002Fbloomz)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Zhipu AI\u003C\u002Fsummary>\n\n   - [GLM-2|6|10|13|70B](https:\u002F\u002Fhuggingface.co\u002FTHUDM)\n   - [CogVLM2-19B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FTHUDM\u002Fcogvlm2-6645f36a29948b67dc4eef75)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>OpenBMB\u003C\u002Fsummary>\n\n  - [MiniCPM-2B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fopenbmb\u002Fminicpm-2b-65d48bf958302b9fd25b698f)\n  - [OmniLLM-12B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FOmniLMM-12B)\n  - [VisCPM-10B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FVisCPM-Chat)\n  - [CPM-Bee-1|2|5|10B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fopenbmb\u002Fcpm-bee-65d491cc84fc93350d789361)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>RWKV Foundation\u003C\u002Fsummary>\n\n  - [RWKV-v4|5|6](https:\u002F\u002Fhuggingface.co\u002FRWKV)minicpm-2b-65d48bf958302b9fd25b698f)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>ElutherAI\u003C\u002Fsummary>\n\n  - [Pythia-1|1.4|2.8|6.9|12B](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fpythia)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Stability AI\u003C\u002Fsummary>\n\n  - [StableLM-3B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-3b-4e1t)\n  - [StableLM-v2-1.6B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-2-1_6b)\n  - [StableLM-v2-12B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-2-12b)\n  - [StableCode-3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fstabilityai\u002Fstable-code-64f9dfb4ebc8a1be0a3f7650)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>BigCode\u003C\u002Fsummary>\n\n  - [StarCoder-1|3|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fbigcode\u002F%E2%AD%90-starcoder-64f9bd5740eb5daaeb81dbec)\n  - [StarCoder2-3|7|15B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fbigcode\u002Fstarcoder2-65de6da6e87db3383572be1a)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>DataBricks\u003C\u002Fsummary>\n\n  - [MPT-7B](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002Fmpt-7b)\n  - [DBRX-132B-MoE](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002Fintroducing-dbrx-new-state-art-open-llm)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Shanghai AI Laboratory\u003C\u002Fsummary>\n  \n  - [InternLM2-1.8|7|20B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm2-65b0ce04970888799707893c)\n  - [InternLM-Math-7B|20B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm2-math-65b0ce88bf7d3327d0a5ad9f)\n  - [InternLM-XComposer2-1.8|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm-xcomposer2-65b3706bf5d76208998e7477)\n  - [InternVL-2|6|14|26](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FOpenGVLab\u002Finternvl-65b92d6be81c86166ca0dde4)\n\n    \n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Moonshot AI\u003C\u002Fsummary>\n  \n  - [Moonlight-A3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fmoonlight-a3b-67f67b029cecfdce34f4dc23)\n  - [Kimi-VL-A3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fkimi-vl-a3b-67f67b6ac91d3b03d382dd85)\n  - [Kimi-K2](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fkimi-k2-6871243b990f2af5ba60617d)\n    \n\u003C\u002Fdetails>\n\n\n## LLM Data\n> Reference: [LLMDataHub](https:\u002F\u002Fgithub.com\u002FZjh-819\u002FLLMDataHub)\n- [IBM data-prep-kit](https:\u002F\u002Fgithub.com\u002FIBM\u002Fdata-prep-kit) - Open-Source Toolkit for Efficient Unstructured Data Processing with Pre-built Modules and Local to Cluster Scalability.\n- [Datatrove](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdatatrove) - Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.\n- [Dingo](https:\u002F\u002Fgithub.com\u002FDataEval\u002Fdingo) - Dingo: A Comprehensive Data Quality Evaluation Tool\n- [FastDatasets](https:\u002F\u002Fgithub.com\u002FZhuLinsen\u002FFastDatasets) - A powerful tool for creating high-quality training datasets for Large Language Models\n\n## LLM Evaluation:\n- [lm-evaluation-harness](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness) - A framework for few-shot evaluation of language models.\n- [lighteval](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Flighteval) - a lightweight LLM evaluation suite that Hugging Face has been using internally.\n- [simple-evals](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fsimple-evals) - Eval tools by OpenAI.\n\n\u003Cdetails>\n\u003Csummary>other evaluation frameworks\u003C\u002Fsummary>\n\n- [OLMO-eval](https:\u002F\u002Fgithub.com\u002Fallenai\u002FOLMo-Eval) - a repository for evaluating open language models.\n- [MixEval](https:\u002F\u002Fgithub.com\u002FPsycoy\u002FMixEval) - A reliable click-and-go evaluation suite compatible with both open-source and proprietary models, supporting MixEval and other benchmarks.\n- [HELM](https:\u002F\u002Fgithub.com\u002Fstanford-crfm\u002Fhelm) - Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models.\n- [instruct-eval](https:\u002F\u002Fgithub.com\u002Fdeclare-lab\u002Finstruct-eval) - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.\n- [Giskard](https:\u002F\u002Fgithub.com\u002FGiskard-AI\u002Fgiskard) - Testing & evaluation library for LLM applications, in particular RAGs\n- [LangSmith](https:\u002F\u002Fwww.langchain.com\u002Flangsmith) - a unified platform from LangChain framework for: evaluation, collaboration HITL (Human In The Loop), logging and monitoring LLM applications.  \n- [Ragas](https:\u002F\u002Fgithub.com\u002Fexplodinggradients\u002Fragas) - a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines.\n\n\u003C\u002Fdetails>\n\n\n\n## LLM Training Frameworks\n\n- [Meta Lingua](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Flingua) - a lean, efficient, and easy-to-hack codebase to research LLMs.\n- [Litgpt](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Flitgpt) - 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.\n- [nanotron](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fnanotron) - Minimalistic large language model 3D-parallelism training.\n- [DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed) - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.\n- [Megatron-LM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMegatron-LM) - Ongoing research training transformer models at scale.\n- [torchtitan](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftorchtitan) - A native PyTorch Library for large model training.\n\n\u003Cdetails>\n\u003Csummary>other frameworks\u003C\u002Fsummary>\n\n  - [Megatron-DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FMegatron-DeepSpeed) - DeepSpeed version of NVIDIA's Megatron-LM that adds additional support for several features such as MoE model training, Curriculum Learning, 3D Parallelism, and others. \n  - [torchtune](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftorchtune) - A Native-PyTorch Library for LLM Fine-tuning.\n  - [ROLL](https:\u002F\u002Fgithub.com\u002Falibaba\u002FROLL) - An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models.\n  - [veRL](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl) - veRL is a flexible and efficient RL framework for LLMs.\n  - [NeMo Framework](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNeMo) - Generative AI framework built for researchers and PyTorch developers working on Large Language Models (LLMs), Multimodal Models (MMs), Automatic Speech Recognition (ASR), Text to Speech (TTS), and Computer Vision (CV) domains.\n  - [Colossal-AI](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI) - Making large AI models cheaper, faster, and more accessible.\n  - [BMTrain](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FBMTrain) - Efficient Training for Big Models.\n  - [Mesh Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmesh) - Mesh TensorFlow: Model Parallelism Made Easier.\n  - [maxtext](https:\u002F\u002Fgithub.com\u002FAI-Hypercomputer\u002Fmaxtext) - A simple, performant and scalable Jax LLM!\n  - [GPT-NeoX](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fgpt-neox) - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.\n  - [Transformer Engine](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTransformerEngine) - A library for accelerating Transformer model training on NVIDIA GPUs.\n  - [OpenRLHF](https:\u002F\u002Fgithub.com\u002FOpenRLHF\u002FOpenRLHF) - An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT).\n  - [TRL](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftrl\u002Fen\u002Findex) - TRL is a full stack library where we provide a set of tools to train transformer language models with Reinforcement Learning, from the Supervised Fine-tuning step (SFT), Reward Modeling step (RM) to the Proximal Policy Optimization (PPO) step.\n  - [unslothai](https:\u002F\u002Fgithub.com\u002Funslothai\u002Funsloth) - A framework that specializes in efficient fine-tuning. On its GitHub page, you can find ready-to-use fine-tuning templates for various LLMs, allowing you to easily train your own data for free on the Google Colab cloud.\n  - [Axolotl](https:\u002F\u002Fgithub.com\u002Faxolotl-ai-cloud\u002Faxolotl) - Open-source framework for fine-tuning and evaluating LLMs. It simplifies the process of experimenting with different training configurations and makes it easy to reproduce and share results, supporting features like LoRA, QLoRA, DeepSpeed, PEFT, and multi-GPU setups.\n\n\u003C\u002Fdetails>\n\n\n## LLM Inference\n\n> Reference: [llm-inference-solutions](https:\u002F\u002Fgithub.com\u002Fmani-kantap\u002Fllm-inference-solutions)\n- [SGLang](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang) - SGLang is a fast serving framework for large language models and vision language models.\n- [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm) - A high-throughput and memory-efficient inference and serving engine for LLMs.\n- [llama.cpp](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fllama.cpp) - LLM inference in C\u002FC++.\n- [ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama) - Get up and running with Llama 3, Mistral, Gemma, and other large language models.\n- [TGI](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftext-generation-inference\u002Fen\u002Findex) - a toolkit for deploying and serving Large Language Models (LLMs).\n- [TensorRT-LLM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-LLM) - Nvidia Framework for LLM Inference\n\u003Cdetails>\n\u003Csummary>other deployment tools\u003C\u002Fsummary>\n\n- [FasterTransformer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FFasterTransformer) - NVIDIA Framework for LLM Inference(Transitioned to TensorRT-LLM)\n- [MInference](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FMInference) - To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention, which reduces inference latency by up to 10x for pre-filling on an A100 while maintaining accuracy.\n- [exllama](https:\u002F\u002Fgithub.com\u002Fturboderp\u002Fexllama) - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.\n- [FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) - A distributed multi-model LLM serving system with web UI and OpenAI-compatible RESTful APIs.\n- [mistral.rs](https:\u002F\u002Fgithub.com\u002FEricLBuehler\u002Fmistral.rs) - Blazingly fast LLM inference.\n- [SkyPilot](https:\u002F\u002Fgithub.com\u002Fskypilot-org\u002Fskypilot) - Run LLMs and batch jobs on any cloud. Get maximum cost savings, highest GPU availability, and managed execution -- all with a simple interface.\n- [Haystack](https:\u002F\u002Fhaystack.deepset.ai\u002F) - an open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI and Cohere to interact with your own data. \n- [OpenLLM](https:\u002F\u002Fgithub.com\u002Fbentoml\u002FOpenLLM) - Fine-tune, serve, deploy, and monitor any open-source LLMs in production. Used in production at [BentoML](https:\u002F\u002Fbentoml.com\u002F) for LLMs-based applications.\n- [DeepSpeed-Mii](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed-MII) -  MII makes low-latency and high-throughput inference, similar to vLLM powered by DeepSpeed.\n- [Text-Embeddings-Inference](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftext-embeddings-inference) - Inference for text-embeddings in Rust, HFOIL Licence.\n- [Infinity](https:\u002F\u002Fgithub.com\u002Fmichaelfeil\u002Finfinity) - Inference for text-embeddings in Python\n- [LMDeploy](https:\u002F\u002Fgithub.com\u002FInternLM\u002Flmdeploy) - A high-throughput and low-latency inference and serving framework for LLMs and VLs\n- [Liger-Kernel](https:\u002F\u002Fgithub.com\u002Flinkedin\u002FLiger-Kernel) - Efficient Triton Kernels for LLM Training.\n- [prima.cpp](https:\u002F\u002Fgithub.com\u002FLizonghang\u002Fprima.cpp) - A distributed implementation of llama.cpp that lets you run 70B-level LLMs on your everyday devices.\n- [deploy-llms-with-ansible](https:\u002F\u002Fgithub.com\u002Fxamey\u002Fdeploy-llms-with-ansible) - Easily deploy any LLM on a VM with minimal configuration, using Ansible.\n\n\u003C\u002Fdetails>\n\n\n## LLM Applications\n> Reference: [awesome-llm-apps](https:\u002F\u002Fgithub.com\u002FShubhamsaboo\u002Fawesome-llm-apps)\n- [dspy](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy) - DSPy: The framework for programming—not prompting—foundation models.\n- [LangChain](https:\u002F\u002Fgithub.com\u002Fhwchase17\u002Flangchain) — A popular Python\u002FJavaScript library for chaining sequences of language model prompts.\n- [LlamaIndex](https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fllama_index) — A Python library for augmenting LLM apps with data.\n\n\u003Cdetails>\n\u003Csummary>more applications\u003C\u002Fsummary>\n\n\n- [MLflow](https:\u002F\u002Fmlflow.org\u002F) - MLflow: An open-source framework for the end-to-end machine learning lifecycle, helping developers track experiments, evaluate models\u002Fprompts, deploy models, and add observability with tracing.\n- [Swiss Army Llama](https:\u002F\u002Fgithub.com\u002FDicklesworthstone\u002Fswiss_army_llama) - Comprehensive set of tools for working with local LLMs for various tasks.\n- [LiteChain](https:\u002F\u002Fgithub.com\u002Frogeriochaves\u002Flitechain) - Lightweight alternative to LangChain for composing LLMs \n- [magentic](https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic) - Seamlessly integrate LLMs as Python functions\n- [wechat-chatgpt](https:\u002F\u002Fgithub.com\u002Ffuergaosi233\u002Fwechat-chatgpt) - Use ChatGPT On Wechat via wechaty\n- [promptfoo](https:\u002F\u002Fgithub.com\u002Ftyppo\u002Fpromptfoo) - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.\n- [Agenta](https:\u002F\u002Fgithub.com\u002Fagenta-ai\u002Fagenta) -  Easily build, version, evaluate and deploy your LLM-powered apps.\n- [Serge](https:\u002F\u002Fgithub.com\u002Fserge-chat\u002Fserge) - a chat interface crafted with llama.cpp for running Alpaca models. No API keys, entirely self-hosted!\n- [Langroid](https:\u002F\u002Fgithub.com\u002Flangroid\u002Flangroid) - Harness LLMs with Multi-Agent Programming\n- [Embedchain](https:\u002F\u002Fgithub.com\u002Fembedchain\u002Fembedchain) - Framework to create ChatGPT like bots over your dataset.\n- [Opik](https:\u002F\u002Fgithub.com\u002Fcomet-ml\u002Fopik) - Confidently evaluate, test, and ship LLM applications with a suite of observability tools to calibrate language model outputs across your dev and production lifecycle.\n- [IntelliServer](https:\u002F\u002Fgithub.com\u002Fintelligentnode\u002FIntelliServer) - simplifies the evaluation of LLMs by providing a unified microservice to access and test multiple AI models.\n- [Langchain-Chatchat](https:\u002F\u002Fgithub.com\u002Fchatchat-space\u002FLangchain-Chatchat) - Formerly langchain-ChatGLM, local knowledge based LLM (like ChatGLM) QA app with langchain.\n- [Search with Lepton](https:\u002F\u002Fgithub.com\u002Fleptonai\u002Fsearch_with_lepton) - Build your own conversational search engine using less than 500 lines of code by [LeptonAI](https:\u002F\u002Fgithub.com\u002Fleptonai).\n- [Robocorp](https:\u002F\u002Fgithub.com\u002Frobocorp\u002Frobocorp) - Create, deploy and operate Actions using Python anywhere to enhance your AI agents and assistants. Batteries included with an extensive set of libraries, helpers and logging.\n- [Tune Studio](https:\u002F\u002Fstudio.tune.app\u002F) - Playground for devs to finetune & deploy LLMs\n- [LLocalSearch](https:\u002F\u002Fgithub.com\u002Fnilsherzig\u002FLLocalSearch) - Locally running websearch using LLM chains\n- [AI Gateway](https:\u002F\u002Fgithub.com\u002FPortkey-AI\u002Fgateway) — Gateway streamlines requests to 100+ open & closed source models with a unified API. It is also production-ready with support for caching, fallbacks, retries, timeouts, loadbalancing, and can be edge-deployed for minimum latency.\n- [talkd.ai dialog](https:\u002F\u002Fgithub.com\u002Ftalkdai\u002Fdialog) - Simple API for deploying any RAG or LLM that you want adding plugins.\n- [Wllama](https:\u002F\u002Fgithub.com\u002Fngxson\u002Fwllama) - WebAssembly binding for llama.cpp - Enabling in-browser LLM inference\n- [GPUStack](https:\u002F\u002Fgithub.com\u002Fgpustack\u002Fgpustack) - An open-source GPU cluster manager for running LLMs\n- [MNN-LLM](https:\u002F\u002Fgithub.com\u002Falibaba\u002FMNN) -- A Device-Inference framework, including LLM Inference on device(Mobile Phone\u002FPC\u002FIOT)\n- [CAMEL](https:\u002F\u002Fwww.camel-ai.org\u002F) - First LLM Multi-agent framework. \n- [QA-Pilot](https:\u002F\u002Fgithub.com\u002Freid41\u002FQA-Pilot) - An interactive chat project that leverages Ollama\u002FOpenAI\u002FMistralAI LLMs for rapid understanding and navigation of GitHub code repository or compressed file resources.\n- [Shell-Pilot](https:\u002F\u002Fgithub.com\u002Freid41\u002Fshell-pilot) - Interact with LLM using Ollama models(or openAI, mistralAI)via pure shell scripts on your Linux(or MacOS) system, enhancing intelligent system management without any dependencies.\n- [MindSQL](https:\u002F\u002Fgithub.com\u002FMindinventory\u002FMindSQL) - A python package for Txt-to-SQL with self hosting functionalities and RESTful APIs compatible with proprietary as well as open source LLM.\n- [Langfuse](https:\u002F\u002Fgithub.com\u002Flangfuse\u002Flangfuse) -  Open Source LLM Engineering Platform 🪢 Tracing, Evaluations, Prompt Management, Evaluations and Playground. \n- [AdalFlow](https:\u002F\u002Fgithub.com\u002FSylphAI-Inc\u002FAdalFlow) - AdalFlow: The library to build&auto-optimize LLM applications.\n- [Guidance](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fguidance) — A handy looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.\n- [Evidently](https:\u002F\u002Fgithub.com\u002Fevidentlyai\u002Fevidently) — An open-source framework to evaluate, test and monitor ML and LLM-powered systems.\n- [Chainlit](https:\u002F\u002Fdocs.chainlit.io\u002Foverview) — A Python library for making chatbot interfaces.\n- [Guardrails.ai](https:\u002F\u002Fwww.guardrailsai.com\u002Fdocs\u002F) — A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.\n- [Semantic Kernel](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fsemantic-kernel) — A Python\u002FC#\u002FJava library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.\n- [Prompttools](https:\u002F\u002Fgithub.com\u002Fhegelai\u002Fprompttools) — Open-source Python tools for testing and evaluating models, vector DBs, and prompts.\n- [Outlines](https:\u002F\u002Fgithub.com\u002Fnormal-computing\u002Foutlines) — A Python library that provides a domain-specific language to simplify prompting and constrain generation.\n- [Promptify](https:\u002F\u002Fgithub.com\u002Fpromptslab\u002FPromptify) — A small Python library for using language models to perform NLP tasks.\n- [Scale Spellbook](https:\u002F\u002Fscale.com\u002Fspellbook) — A paid product for building, comparing, and shipping language model apps.\n- [PromptPerfect](https:\u002F\u002Fpromptperfect.jina.ai\u002Fprompts) — A paid product for testing and improving prompts.\n- [Weights & Biases](https:\u002F\u002Fwandb.ai\u002Fsite\u002Fsolutions\u002Fllmops) — A paid product for tracking model training and prompt engineering experiments.\n- [OpenAI Evals](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fevals) — An open-source library for evaluating task performance of language models and prompts.\n\n- [Arthur Shield](https:\u002F\u002Fwww.arthur.ai\u002Fget-started) — A paid product for detecting toxicity, hallucination, prompt injection, etc.\n- [LMQL](https:\u002F\u002Flmql.ai) — A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.\n- [ModelFusion](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion) - A TypeScript library for building apps with LLMs and other ML models (speech-to-text, text-to-speech, image generation).\n- [OneKE](https:\u002F\u002Fopenspg.yuque.com\u002Fndx6g9\u002Fps5q6b\u002Fvfoi61ks3mqwygvy) — A bilingual Chinese-English knowledge extraction model with knowledge graphs and natural language processing technologies.\n- [llm-ui](https:\u002F\u002Fgithub.com\u002Fllm-ui-kit\u002Fllm-ui) - A React library for building LLM UIs.\n- [Wordware](https:\u002F\u002Fwww.wordware.ai) - A web-hosted IDE where non-technical domain experts work with AI Engineers to build task-specific AI agents. We approach prompting as a new programming language rather than low\u002Fno-code blocks.\n- [Wallaroo.AI](https:\u002F\u002Fgithub.com\u002FWallarooLabs) - Deploy, manage, optimize any model at scale across any environment from cloud to edge. Let's you go from python notebook to inferencing in minutes.\n- [Dify](https:\u002F\u002Fgithub.com\u002Flanggenius\u002Fdify) - An open-source LLM app development platform with an intuitive interface that streamlines AI workflows, model management, and production deployment.\n- [LazyLLM](https:\u002F\u002Fgithub.com\u002FLazyAGI\u002FLazyLLM) - An open-source LLM app for building multi-agent LLMs applications in an easy and lazy way, supports model deployment and fine-tuning.\n- [MemFree](https:\u002F\u002Fgithub.com\u002Fmemfreeme\u002Fmemfree) - Open Source Hybrid AI Search Engine, Instantly Get Accurate Answers from the Internet, Bookmarks, Notes, and Docs. Support One-Click Deployment\n- [AutoRAG](https:\u002F\u002Fgithub.com\u002FMarker-Inc-Korea\u002FAutoRAG) - Open source AutoML tool for RAG. Optimize the RAG answer quality automatically. From generation evaluation datset to deploying optimized RAG pipeline.\n- [Epsilla](https:\u002F\u002Fgithub.com\u002Fepsilla-cloud) - An all-in-one LLM Agent platform with your private data and knowledge, delivers your production-ready AI Agents on Day 1.\n- [Arize-Phoenix](https:\u002F\u002Fphoenix.arize.com\u002F) - Open-source tool for ML observability that runs in your notebook environment. Monitor and fine tune LLM, CV and Tabular Models.\n- [LLM]([https:\u002F\u002Fgithub.com\u002Fsimonw\u002Fllm) - A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.\n- [Just-Chat](https:\u002F\u002Fgithub.com\u002Flongevity-genie\u002Fjust-chat) - Make your LLM agent and chat with it simple and fast!\n- [Agentic Radar](https:\u002F\u002Fgithub.com\u002Fsplx-ai\u002Fagentic-radar) - Open-source CLI security scanner for agentic workflows. Scans your workflow’s source code, detects vulnerabilities, and generates an interactive visualization along with a detailed security report. Supports LangGraph, CrewAI, n8n, OpenAI Agents, and more.\n- [LangWatch](https:\u002F\u002Fgithub.com\u002Flangwatch\u002Flangwatch) - Open-source LLM observability, prompt evaulation, and prompt optimzation platform.\n- [TensorZero](https:\u002F\u002Fwww.tensorzero.com\u002F) - TensorZero is an open-source framework for building production-grade LLM applications. It unifies an LLM gateway, observability, optimization, evaluations, and experimentation.\n\n\u003C\u002Fdetails>\n\n## LLM Tutorials and Courses\n- [Andrej Karpathy Series](https:\u002F\u002Fwww.youtube.com\u002F@AndrejKarpathy) - My favorite!\n- [Umar Jamil Series](https:\u002F\u002Fwww.youtube.com\u002F@umarjamilai) - high quality and educational videos you don't want to miss.\n- [Alexander Rush Series](https:\u002F\u002Frush-nlp.com\u002Fprojects\u002F) - high quality and educational materials you don't want to miss.\n- [llm-course](https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course) - Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.\n- [UWaterloo CS 886](https:\u002F\u002Fcs.uwaterloo.ca\u002F~wenhuche\u002Fteaching\u002Fcs886\u002F) - Recent Advances on Foundation Models.\n- [CS25-Transformers United](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs25\u002F)\n- [ChatGPT Prompt Engineering](https:\u002F\u002Fwww.deeplearning.ai\u002Fshort-courses\u002Fchatgpt-prompt-engineering-for-developers\u002F)\n- [Princeton: Understanding Large Language Models](https:\u002F\u002Fwww.cs.princeton.edu\u002Fcourses\u002Farchive\u002Ffall22\u002Fcos597G\u002F)\n- [CS324 - Large Language Models](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2022\u002F)\n- [State of GPT](https:\u002F\u002Fbuild.microsoft.com\u002Fen-US\u002Fsessions\u002Fdb3f4859-cd30-4445-a0cd-553c3304f8e2)\n- [A Visual Guide to Mamba and State Space Models](https:\u002F\u002Fmaartengrootendorst.substack.com\u002Fp\u002Fa-visual-guide-to-mamba-and-state?utm_source=multiple-personal-recommendations-email&utm_medium=email&open=false)\n- [Let's build GPT: from scratch, in code, spelled out.](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kCc8FmEb1nY)\n- [minbpe](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zduSFxRajkE&t=1157s) - Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.\n- [femtoGPT](https:\u002F\u002Fgithub.com\u002Fkeyvank\u002FfemtoGPT) - Pure Rust implementation of a minimal Generative Pretrained Transformer.\n- [Neurips2022-Foundational Robustness of Foundation Models](https:\u002F\u002Fnips.cc\u002Fvirtual\u002F2022\u002Ftutorial\u002F55796)\n- [ICML2022-Welcome to the \"Big Model\" Era: Techniques and Systems to Train and Serve Bigger Models](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2022\u002Ftutorial\u002F18440)\n- [GPT in 60 Lines of NumPy](https:\u002F\u002Fjaykmody.com\u002Fblog\u002Fgpt-from-scratch\u002F)\n- [LLM‑RL‑Visualized (EN)](https:\u002F\u002Fgithub.com\u002Fchangyeyu\u002FLLM-RL-Visualized\u002Fblob\u002Fmaster\u002Fsrc\u002FREADME_EN.md) | [LLM‑RL‑Visualized (中文)](https:\u002F\u002Fgithub.com\u002Fchangyeyu\u002FLLM-RL-Visualized) - 100+  LLM \u002F RL Algorithm Maps📚.\n\n\n## LLM Books\n- [Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs](https:\u002F\u002Famzn.to\u002F3GUlRng) - it comes with a [GitHub repository](https:\u002F\u002Fgithub.com\u002Fbenman1\u002Fgenerative_ai_with_langchain) that showcases a lot of the functionality\n- [Build a Large Language Model (From Scratch)](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fbuild-a-large-language-model-from-scratch) - A guide to building your own working LLM.\n- [BUILD GPT: HOW AI WORKS](https:\u002F\u002Fwww.amazon.com\u002Fdp\u002F9152799727?ref_=cm_sw_r_cp_ud_dp_W3ZHCD6QWM3DPPC0ARTT_1) - explains how to code a Generative Pre-trained Transformer, or GPT, from scratch.\n- [Hands-On Large Language Models: Language Understanding and Generation](https:\u002F\u002Fwww.llm-book.com\u002F) - Explore the world of Large Language Models with over 275 custom made figures in this illustrated guide!\n- [The Chinese Book for Large Language Models](http:\u002F\u002Faibox.ruc.edu.cn\u002Fzws\u002Findex.htm) - An Introductory LLM Textbook Based on [*A Survey of Large Language Models*](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223).\n\n## Great thoughts about LLM\n- [Why did all of the public reproduction of GPT-3 fail?](https:\u002F\u002Fjingfengyang.github.io\u002Fgpt)\n- [A Stage Review of Instruction Tuning](https:\u002F\u002Fyaofu.notion.site\u002FJune-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2)\n- [LLM Powered Autonomous Agents](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-06-23-agent\u002F)\n- [Why you should work on AI AGENTS!](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fqVLjtvWgq8)\n- [Google \"We Have No Moat, And Neither Does OpenAI\"](https:\u002F\u002Fwww.semianalysis.com\u002Fp\u002Fgoogle-we-have-no-moat-and-neither)\n- [AI competition statement](https:\u002F\u002Fpetergabriel.com\u002Fnews\u002Fai-competition-statement\u002F)\n- [Prompt Engineering](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-03-15-prompt-engineering\u002F)\n- [Noam Chomsky: The False Promise of ChatGPT](https:\u002F\u002Fwww.nytimes.com\u002F2023\u002F03\u002F08\u002Fopinion\u002Fnoam-chomsky-chatgpt-ai.html)\n- [Is ChatGPT 175 Billion Parameters? Technical Analysis](https:\u002F\u002Forenleung.super.site\u002Fis-chatgpt-175-billion-parameters-technical-analysis)\n- [The Next Generation Of Large Language Models ](https:\u002F\u002Fwww.notion.so\u002FAwesome-LLM-40c8aa3f2b444ecc82b79ae8bbd2696b)\n- [Large Language Model Training in 2023](https:\u002F\u002Fresearch.aimultiple.com\u002Flarge-language-model-training\u002F)\n- [How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources](https:\u002F\u002Fyaofu.notion.site\u002FHow-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1)\n- [Open Pretrained Transformers](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=p9IxoSkvZ-M&t=4s)\n- [Scaling, emergence, and reasoning in large language models](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1EUV7W7X_w0BDrscDhPg7lMGzJCkeaPkGCJ3bN8dluXc\u002Fedit?pli=1&resourcekey=0-7Nz5A7y8JozyVrnDtcEKJA#slide=id.g16197112905_0_0)\n\n## Miscellaneous\n\n\n- [Emergent Mind](https:\u002F\u002Fwww.emergentmind.com) - The latest AI news, curated & explained by GPT-4.\n- [ShareGPT](https:\u002F\u002Fsharegpt.com) - Share your wildest ChatGPT conversations with one click.\n- [Major LLMs + Data Availability](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1bmpDdLZxvTCleLGVPgzoMTQ0iDP2-7v7QziPrzPdHyM\u002Fedit#gid=0)\n- [500+ Best AI Tools](https:\u002F\u002Fvaulted-polonium-23c.notion.site\u002F500-Best-AI-Tools-e954b36bf688404ababf74a13f98d126)\n- [Cohere Summarize Beta](https:\u002F\u002Ftxt.cohere.ai\u002Fsummarize-beta\u002F) - Introducing Cohere Summarize Beta: A New Endpoint for Text Summarization\n- [chatgpt-wrapper](https:\u002F\u002Fgithub.com\u002Fmmabrouk\u002Fchatgpt-wrapper) - ChatGPT Wrapper is an open-source unofficial Python API and CLI that lets you interact with ChatGPT.\n- [Cursor](https:\u002F\u002Fwww.cursor.so) - Write, edit, and chat about your code with a powerful AI.\n- [AutoGPT](https:\u002F\u002Fgithub.com\u002FSignificant-Gravitas\u002FAuto-GPT) - an experimental open-source application showcasing the capabilities of the GPT-4 language model. \n- [OpenAGI](https:\u002F\u002Fgithub.com\u002Fagiresearch\u002FOpenAGI) - When LLM Meets Domain Experts.\n- [EasyEdit](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyEdit) - An easy-to-use framework to edit large language models.\n- [chatgpt-shroud](https:\u002F\u002Fgithub.com\u002FguyShilo\u002Fchatgpt-shroud) - A Chrome extension for OpenAI's ChatGPT, enhancing user privacy by enabling easy hiding and unhiding of chat history. Ideal for privacy during screen shares.\n- [AI For Developers](https:\u002F\u002Faifordevelopers.org) - List of AI Tools and Agents for Developers\n\n## Contributing\n\nThis is an active repository and your contributions are always welcome!\n\nI will keep some pull requests open if I'm not sure if they are awesome for LLM, you could vote for them by adding 👍 to them.\n\n---\n\nIf you have any question about this opinionated list, do not hesitate to contact me chengxin1998@stu.pku.edu.cn.\n\n[^1]: This is not legal advice. Please contact the original authors of the models for more information.\n","# 令人惊叹的大型语言模型 [![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fawesome.re)\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHannibal046_Awesome-LLM_readme_9a36c1759d7e.gif)\n\n🔥 大型语言模型（LLM）以迅猛之势席卷了~~自然语言处理社区~~ ~~人工智能社区~~ **整个世界**。这里整理了一份关于大型语言模型的精选论文列表，尤其是与ChatGPT相关的研究。此外，还包含了用于训练LLM的框架、部署LLM的工具、有关LLM的课程和教程，以及所有公开可用的LLM检查点和API。\n\n## 热门LLM项目\n\n- [TinyZero](https:\u002F\u002Fgithub.com\u002FJiayi-Pan\u002FTinyZero) - 清洁、极简、易于访问的DeepSeek R1-Zero复现版本\n- [open-r1](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fopen-r1) - DeepSeek-R1的完全开源复现\n- [DeepSeek-R1](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1) - DeepSeek推出的第一代推理模型。\n- [Qwen2.5-Max](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-max\u002F) - 探索大规模MoE模型的智能潜力。\n- [OpenAI o3-mini](https:\u002F\u002Fopenai.com\u002Findex\u002Fopenai-o3-mini\u002F) - 推动高性价比推理技术的前沿发展。\n- [DeepSeek-V3](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-V3) - 首个开源的GPT-4o级别模型。\n- [Kimi-K2](https:\u002F\u002Fgithub.com\u002FMoonshotAI\u002FKimi-K2) - 具有320亿活跃参数和1万亿总参数的MoE语言模型。\n\n\n## 目录\n- [令人惊叹的大型语言模型](#awesome-llm-)\n  - [里程碑式论文](#milestone-papers)\n  - [其他论文](#other-papers)\n  - [LLM排行榜](#llm-leaderboard)\n  - [开源LLM](#open-llm)\n  - [LLM数据](#llm-data)\n  - [LLM评估](#llm-evaluation)\n  - [LLM训练框架](#llm-training-frameworks)\n  - [LLM推理](#llm-inference)\n  - [LLM应用](#llm-applications)\n  - [LLM教程和课程](#llm-tutorials-and-courses)\n  - [LLM书籍](#llm-books)\n  - [关于LLM的精彩观点](#great-thoughts-about-llm)\n  - [杂项](#miscellaneous)\n\n## 里程碑式论文\n\n\u003Cdetails>\n\n\u003C摘要> 里程碑论文 \u003C\u002F摘要>\n  \n|   日期  |       关键词       |      研究机构     |                                                                                                        论文                                                                                                       |\n|:-------:|:--------------------:|:------------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 2017-06 |     变压器     |       Google       | [注意力就是你所需要的](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03762.pdf)                                                                                                                                                  |\n| 2018-06 |        GPT 1.0       |       OpenAI       | [通过生成式预训练提升语言理解能力](https:\u002F\u002Fwww.cs.ubc.ca\u002F~amuham01\u002FLING530\u002Fpapers\u002Fradford2018improving.pdf)                                                                             |\n| 2018-10 |         BERT         |       Google       | [BERT：面向语言理解的深度双向变压器预训练](https:\u002F\u002Faclanthology.org\u002FN19-1423.pdf)                                                                                          |\n| 2019-02 |        GPT 2.0       |       OpenAI       | [语言模型是无监督的多任务学习者](https:\u002F\u002Fd4mucfpksywv.cloudfront.net\u002Fbetter-language-models\u002Flanguage_models_are_unsupervised_multitask_learners.pdf)                                          |\n| 2019-09 |      Megatron-LM     |       NVIDIA       | [Megatron-LM：使用模型并行训练数千亿参数的语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.08053.pdf)                                                                                      |\n| 2019-10 |          T5          |       Google       | [探索统一文本到文本变换器迁移学习的极限](https:\u002F\u002Fjmlr.org\u002Fpapers\u002Fv21\u002F20-074.html)                                                                                       |\n| 2019-10 |         ZeRO         |      Microsoft     | [ZeRO：面向训练万亿参数模型的内存优化技术](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.02054.pdf)                                                                                                       |\n| 2020-01 |      扩展定律     |       OpenAI       | [神经语言模型的扩展定律](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.08361.pdf)                                                                                                                                    |\n| 2020-05 |        GPT 3.0       |       OpenAI       | [语言模型是少样本学习者](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2020\u002Ffile\u002F1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)                                                                                         |\n| 2021-01 |  Switch Transformers |       Google       | [Switch Transformers：通过简单高效的稀疏性扩展至万亿参数模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.03961.pdf)                                                                               |\n| 2021-08 |         Codex        |       OpenAI       | [对基于代码训练的大规模语言模型的评估](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.03374.pdf)                                                                                                                           |\n| 2021-08 |   基础模型  |      Stanford      | [关于基础模型的机会与风险](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.07258.pdf)                                                                                                                        |\n| 2021-09 |         FLAN         |       Google       | [微调后的语言模型是零样本学习者](https:\u002F\u002Fopenreview.net\u002Fforum?id=gEZrGCozdqR)                                                                                                                    |\n| 2021-10 |          T0          | HuggingFace et al. | [多任务提示训练实现零样本任务泛化](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.08207)                                                                                                              |\n| 2021-12 |         GLaM         |       Google       | [GLaM：利用专家混合高效扩展语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.06905.pdf)                                                                                                         |\n| 2021-12 |        WebGPT        |       OpenAI       | [WebGPT：基于浏览器的人工反馈问答系统](https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FWebGPT%3A-Browser-assisted-question-answering-with-Nakano-Hilton\u002F2f3efe44083af91cef562c1a3451eee2f8601d22) |\n| 2021-12 |         Retro        |      DeepMind      | [通过检索数万亿个标记改进语言模型](https:\u002F\u002Fwww.deepmind.com\u002Fpublications\u002Fimproving-language-models-by-retrieving-from-trillions-of-tokens)                                         |\n| 2021-12 |        Gopher        |      DeepMind      | [语言模型的扩展：训练Gopher的方法、分析与见解](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.11446.pdf)                                                                                                 |\n| 2022-01 |          COT         |       Google       | [思维链提示在大型语言模型中激发推理能力](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.11903.pdf)                                                                                                      |\n| 2022-01 |         LaMDA        |       Google       | [LaMDA：面向对话应用的语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.08239.pdf)                                                                                                                             |\n| 2022-01 |        Minerva       |       Google       | [利用语言模型解决定量推理问题](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14858)                                                                                                                   |\n| 2022-01 |  Megatron-Turing NLG |  Microsoft&NVIDIA  | [使用Deep和Megatron训练Megatron-Turing NLG 530B，一个大规模生成式语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.11990.pdf)                                                                         |\n| 2022-03 |      InstructGPT     |       OpenAI       | [通过人工反馈训练语言模型遵循指令](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02155.pdf)                                                                                                        |\n| 2022-04 |         PaLM         |       Google       | [PaLM：利用Pathways扩展语言建模](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.02311.pdf)                                                                                                                              |\n| 2022-04 |      Chinchilla      |      DeepMind      | [训练计算最优的大规模语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.15556)                             |\n| 2022-05 |          OPT         |        Meta        | [OPT：开放的预训练Transformer语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.01068.pdf)                                                                                                                          |\n| 2022-05 |          UL2         |       Google       | [统一语言学习范式](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.05131v1)                                                                                                                                         |\n| 2022-06 |  涌现能力  |       Google       | [大型语言模型的涌现能力](https:\u002F\u002Fopenreview.net\u002Fpdf?id=yzkSU5zdwD)                                                                                                                            |\n| 2022-06 |       BIG-bench      |       Google       | [超越模仿游戏：量化和外推语言模型的能力](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench)                                                                                |\n| 2022-06 |        METALM        |      Microsoft     | [语言模型是通用接口](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.06336.pdf)                                                                                                                             |\n| 2022-09 |        Sparrow       |      DeepMind      | [通过有针对性的人类判断改进对话代理的一致性](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.14375.pdf)                                                                                                       |\n| 2022-10 |     Flan-T5\u002FPaLM     |       Google       | [扩展指令微调后的语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.11416.pdf)                                                                                                                              |\n| 2022-10 |       GLM-130B       |      Tsinghua      | [GLM-130B：一个开放的双语预训练模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.02414.pdf)                                                                                                                              |\n| 2022-11 |         HELM         |      Stanford      | [语言模型的整体评估](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09110.pdf)                                                                                                                                     |\n| 2022-11 |         BLOOM        |     BigScience     | [BLOOM：一个1760亿参数的开源多语言语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.05100.pdf)                                                                                                            |\n| 2022-11 |       Galactica      |        Meta        | [Galactica：一个用于科学领域的大型语言模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.09085.pdf)                                                                                                                              |\n| 2022-12 |        OPT-IML       |        Meta        | [OPT-IML：从泛化的视角扩展语言模型指令元学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.12017)                                                                                   |\n| 2023-01 | Flan 2022 Collection |       Google       | [Flan系列：为有效指令微调设计数据与方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2301.13688.pdf)                                                                                           |\n| 2023-02 |         LLaMA        |        Meta        | [LLaMA：开放且高效的底层语言模型](https:\u002F\u002Fresearch.facebook.com\u002Fpublications\u002Fllama-open-and-efficient-foundation-language-models\u002F)                                                            |\n| 2023-02 |       Kosmos-1       |      Microsoft     | [语言并非全部所需：将感知与语言模型对齐](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14045)                                                                                                         |\n| 2023-03 |        LRU        |       DeepMind       | [复活循环神经网络以处理长序列](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.06349)                                                                                                                                          |\n| 2023-03 |        PaLM-E        |       Google       | [PaLM-E：一个具身的多模态语言模型](https:\u002F\u002Fpalm-e.github.io)                                                                                                                                          |\n| 2023-03 |         GPT 4        |       OpenAI       | [GPT-4技术报告](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fgpt-4)                                                                                                                                                        |\n| 2023-04 |        LLaVA        | UW–Madison&Microsoft | [视觉指令微调](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08485)                                                                                                |\n| 2023-04 |        Pythia        |  EleutherAI et al. | [Pythia：一套用于分析训练与扩展过程中大型语言模型的工具](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01373)                                                                                                |\n| 2023-05 |       Dromedary      |     CMU et al.     | [基于原则的自对齐语言模型：从零开始，仅需最少的人工监督](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03047)                                                                                 |\n| 2023-05 |        PaLM 2        |       Google       | [PaLM 2技术报告](https:\u002F\u002Fai.google\u002Fstatic\u002Fdocuments\u002Fpalm2techreport.pdf)                                                                                                                                  |\n| 2023-05 |         RWKV         |       Bo Peng      | [RWKV：为Transformer时代重新发明RNN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13048)                                                                                                                                 |\n| 2023-05 |          DPO         |      Stanford      | [直接偏好优化：你的语言模型其实是一个奖励模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.18290.pdf)                                                                                             |\n| 2023-05 |          ToT         |  Google&Princeton  | [思维之树：利用大型语言模型进行深思熟虑的问题解决](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.10601.pdf)                                                                                                    |\n| 2023-07 |        LLaMA2       |        Meta        | [Llama 2：开放的基础模型与微调后的聊天模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2307.09288.pdf)                                                                                                                        |\n| 2023-10 |      Mistral 7B      |       Mistral      | [Mistral 7B](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2310.06825.pdf)                                                                                                                                                                 |\n| 2023-12 |         Mamba        |    CMU&Princeton   | [Mamba：利用选择性状态空间实现线性时间序列建模](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.00752.pdf)                                                                                                               |\n| 2024-01 |         DeepSeek-v2        |      DeepSeek     | [DeepSeek-V2：一款强大、经济且高效的专家混合语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)                                                                                                                          |\n| 2024-02 |         OLMo        |      Ai2     | [OLMo：加速语言模型科学研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.00838) |\n| 2024-05 |         Mamba2        |      CMU&Princeton     | [Transformer就是SSM：通过结构化状态空间对偶性实现通用模型与高效算法](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.21060)|\n| 2024-05 |         Llama3        |      Meta     | [Llama 3系列模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.21783) |\n| 2024-06 |         FineWeb         |      HuggingFace     | [FineWeb数据集：大规模筛选网络以获取最优质文本数据](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.17557) |\n| 2024-09 |         OLMoE        |       Ai2     | [OLMoE：开放的专家混合语言模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02060) |\n| 2024-12 |         Qwen2.5        |      Alibaba     | [Qwen2.5技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15115) |\n| 2024-12 |         DeepSeek-V3        |      DeepSeek     | [DeepSeek-V3技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.19437v1) |\n| 2025-01 |         DeepSeek-R1        |      DeepSeek     | [DeepSeek-R1：通过强化学习激励LLM的推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.12948) |\n\n\u003C\u002Fdetails>\n\n\n\n## 其他论文\n> [!NOTE]\n> 如果你对大语言模型领域感兴趣，上述里程碑论文列表可以帮助你了解该领域的历史和发展现状。然而，大语言模型的每个研究方向都提供了独特的见解和贡献，这些对于全面理解整个领域至关重要。如需各个子领域的详细论文列表，请参阅以下链接：\n\n\u003Cdetails>\n  \u003Csummary>其他论文\u003C\u002Fsummary>\n\n- [Awesome-LLM-hallucination](https:\u002F\u002Fgithub.com\u002FLuckyyySTA\u002FAwesome-LLM-hallucination) - 大语言模型幻觉相关论文列表。\n- [awesome-hallucination-detection](https:\u002F\u002Fgithub.com\u002FEdinburghNLP\u002Fawesome-hallucination-detection) - 大语言模型幻觉检测相关论文列表。\n- [LLMsPracticalGuide](https:\u002F\u002Fgithub.com\u002FMooler0410\u002FLLMsPracticalGuide) - 大语言模型实用指南资源精选列表。\n- [Awesome ChatGPT Prompts](https:\u002F\u002Fgithub.com\u002Ff\u002Fawesome-chatgpt-prompts) - 用于ChatGPT模型的提示词示例集合。\n- [awesome-chatgpt-prompts-zh](https:\u002F\u002Fgithub.com\u002FPlexPt\u002Fawesome-chatgpt-prompts-zh) - 中文版的ChatGPT提示词示例集合。\n- [Awesome ChatGPT](https:\u002F\u002Fgithub.com\u002Fhumanloop\u002Fawesome-chatgpt) - OpenAI旗下ChatGPT和GPT-3相关资源精选列表。\n- [Chain-of-Thoughts Papers](https:\u002F\u002Fgithub.com\u002FTimothyxxx\u002FChain-of-ThoughtsPapers) - 以“Chain of Thought Prompting Elicits Reasoning in Large Language Models”为起点的趋势。\n- [Awesome Deliberative Prompting](https:\u002F\u002Fgithub.com\u002Flogikon-ai\u002Fawesome-deliberative-prompting) - 如何引导大语言模型生成可靠推理并作出基于推理的决策。\n- [Instruction-Tuning-Papers](https:\u002F\u002Fgithub.com\u002FSinclairCoder\u002FInstruction-Tuning-Papers) - 以`Natrural-Instruction`（ACL 2022）、`FLAN`（ICLR 2022）和`T0`（ICLR 2022）为开端的趋势。\n- [LLM Reading List](https:\u002F\u002Fgithub.com\u002Fcrazyofapple\u002FReading_groups\u002F) - 大语言模型相关的论文与资源列表。\n- [Reasoning using Language Models](https:\u002F\u002Fgithub.com\u002Fatfortes\u002FLM-Reasoning-Papers) - 关于利用语言模型进行推理的论文与资源汇编。\n- [Chain-of-Thought Hub](https:\u002F\u002Fgithub.com\u002FFranxYao\u002Fchain-of-thought-hub) - 用于衡量大语言模型推理性能的资源。\n- [Awesome GPT](https:\u002F\u002Fgithub.com\u002Fformulahendry\u002Fawesome-gpt) - 与GPT、ChatGPT、OpenAI、大语言模型等相关的优秀项目和资源精选列表。\n- [Awesome GPT-3](https:\u002F\u002Fgithub.com\u002Felyase\u002Fawesome-gpt3) - 关于[OpenAI GPT-3 API](https:\u002F\u002Fopenai.com\u002Fblog\u002Fopenai-api\u002F)的演示和文章集合。\n- [Awesome LLM Human Preference Datasets](https:\u002F\u002Fgithub.com\u002FPolisAI\u002Fawesome-llm-human-preference-datasets) - 用于大语言模型指令微调、RLHF及评估的人类偏好数据集集合。\n- [RWKV-howto](https:\u002F\u002Fgithub.com\u002FHannibal046\u002FRWKV-howto) - 学习RWKV可能有用的资料和教程。\n- [ModelEditingPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FModelEditingPapers) - 针对大语言模型的模型编辑相关论文与资源列表。\n- [Awesome LLM Security](https:\u002F\u002Fgithub.com\u002Fcorca-ai\u002Fawesome-llm-security) - 大语言模型安全领域的优秀工具、文档和项目精选。\n- [Awesome-Align-LLM-Human](https:\u002F\u002Fgithub.com\u002FGaryYufei\u002FAlignLLMHumanSurvey) - 关于将大语言模型与人类对齐的论文与资源集合。\n- [Awesome-Code-LLM](https:\u002F\u002Fgithub.com\u002Fhuybery\u002FAwesome-Code-LLM) - 研究用的最佳代码大语言模型精选列表。\n- [Awesome-LLM-Compression](https:\u002F\u002Fgithub.com\u002FHuangOwen\u002FAwesome-LLM-Compression) - 大语言模型压缩领域的优秀研究论文和工具。\n- [Awesome-LLM-Systems](https:\u002F\u002Fgithub.com\u002FAmberLJC\u002FLLMSys-PaperList) - 大语言模型系统研究领域的优秀论文列表。\n- [awesome-llm-webapps](https:\u002F\u002Fgithub.com\u002Fsnowfort-ai\u002Fawesome-llm-webapps) - 开源且持续维护的大语言模型应用Web端集合。\n- [awesome-japanese-llm](https:\u002F\u002Fgithub.com\u002Fllm-jp\u002Fawesome-japanese-llm) - 日本语大语言模型综述 - 日本语大语言模型概览。\n- [Awesome-LLM-Healthcare](https:\u002F\u002Fgithub.com\u002Fmingze-yuan\u002FAwesome-LLM-Healthcare) - 医学领域中大语言模型相关综述论文列表。\n- [Awesome-LLM-Inference](https:\u002F\u002Fgithub.com\u002FDefTruth\u002FAwesome-LLM-Inference) - 带有代码的优秀大语言模型推理论文精选列表。\n- [Awesome-LLM-3D](https:\u002F\u002Fgithub.com\u002FActiveVisionLab\u002FAwesome-LLM-3D) - 3D世界中的多模态大语言模型精选列表，涵盖3D理解、推理、生成以及具身智能体等内容。\n- [LLMDatahub](https:\u002F\u002Fgithub.com\u002FZjh-819\u002FLLMDataHub) - 专为聊天机器人训练设计的数据集精选集合，包含每个数据集的链接、规模、语言、用途及简要说明。\n- [Awesome-Chinese-LLM](https:\u002F\u002Fgithub.com\u002FHqWu-HITCS\u002FAwesome-Chinese-LLM) - 整理开源的中文大语言模型，以规模较小、可私有化部署、训练成本较低的模型为主，包括底座模型，垂直领域微调及应用，数据集与教程等。\n\n- [LLM4Opt](https:\u002F\u002Fgithub.com\u002FFeiLiu36\u002FLLM4Opt) - 将大语言模型应用于各类优化任务是一个新兴的研究领域。此列表汇集了LLM4Opt的相关参考文献和论文。\n- [awesome-language-model-analysis](https:\u002F\u002Fgithub.com\u002FFuryton\u002Fawesome-language-model-analysis) - 该论文列表专注于语言模型的理论或实证分析，例如学习动态、表达能力、可解释性、泛化能力等有趣主题。\n  \n\u003C\u002Fdetails>\n\n## 大型语言模型排行榜\n- [Chatbot Arena 排行榜](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmsys\u002Fchatbot-arena-leaderboard) - 一个用于大型语言模型（LLMs）的基准平台，以众包方式开展匿名、随机对决。\n- [LiveBench](https:\u002F\u002Flivebench.ai\u002F#\u002F) - 一个具有挑战性且无污染的 LLM 基准测试。\n- [Open LLM 排行榜](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fopen-llm-leaderboard\u002Fopen_llm_leaderboard) - 旨在跟踪、排名和评估新发布的 LLM 和聊天机器人。\n- [AlpacaEval](https:\u002F\u002Ftatsu-lab.github.io\u002Falpaca_eval\u002F) - 使用 Nous 基准套件对遵循指令的语言模型进行自动评估的工具。\n\u003Cdetails>\n  \u003Csummary> 其他排行榜 \u003C\u002Fsummary>\n\n- [ACLUE](https:\u002F\u002Fgithub.com\u002Fisen-zhang\u002FACLUE) - 一个专注于古代汉语理解的评估基准。\n- [BeHonest](https:\u002F\u002Fgair-nlp.github.io\u002FBeHonest\u002F#leaderboard) - 一个开创性的基准，专门用于全面评估 LLM 的诚实性。\n- [伯克利函数调用排行榜](https:\u002F\u002Fgorilla.cs.berkeley.edu\u002Fleaderboard.html) - 评估 LLM 调用外部函数或工具的能力。\n- [中文大模型排行榜](https:\u002F\u002Fgithub.com\u002Fjeinlee1991\u002Fchinese-llm-benchmark) - 由专家主导的中文 LLM 基准测试。\n- [CompassRank](https:\u002F\u002Frank.opencompass.org.cn) - CompassRank 致力于探索最先进的语言和视觉模型，为行业和研究提供全面、客观、中立的评估参考。\n- [CompMix](https:\u002F\u002Fqa.mpi-inf.mpg.de\u002Fcompmix) - 一个评估在混合异构输入源（知识库、文本、表格、信息框）上运行的问答方法的基准。\n- [DreamBench++](https:\u002F\u002Fdreambenchplus.github.io\u002F#leaderboard) - 一个用于评估大型语言模型（LLMs）在文本和视觉想象相关任务中表现的基准。\n- [FELM](https:\u002F\u002Fhkust-nlp.github.io\u002Ffelm) - 一个元基准，用于评估事实核查器对大型语言模型（LLMs）输出的评估效果。\n- [InfiBench](https:\u002F\u002Finfi-coder.github.io\u002Finfibench) - 一个专门用于评估大型语言模型（LLMs）回答现实世界编码相关问题能力的基准。\n- [LawBench](https:\u002F\u002Flawbench.opencompass.org.cn\u002Fleaderboard) - 一个用于评估法律领域大型语言模型的基准。\n- [LLMEval](http:\u002F\u002Fllmeval.com) - 专注于理解这些模型在各种场景下的表现，并从可解释性的角度分析结果。\n- [M3CoT](https:\u002F\u002Flightchen233.github.io\u002Fm3cot.github.io\u002Fleaderboard.html) - 一个评估大型语言模型在多种多模态推理任务上的基准，包括语言、自然科学与社会科学、物理与社会常识、时间推理、代数和几何等。\n- [MathEval](https:\u002F\u002Fmatheval.ai) - 一个综合性的基准测试平台，旨在评估大型模型在 20 个领域、近 3 万道数学题目中的数学能力。\n- [MixEval](https:\u002F\u002Fmixeval.github.io\u002F#leaderboard) - 一个基于真实数据的动态基准，源自现成的基准组合，在本地快速运行（耗时和成本仅为 MMLU 的 6%），同时具备极高的模型排名相关性（与 Chatbot Arena 的相关性达 0.96）。\n- [MMedBench](https:\u002F\u002Fhenrychur.github.io\u002FMultilingualMedQA) - 一个评估大型语言模型跨多种语言回答医学问题能力的基准。\n- [MMToM-QA](https:\u002F\u002Fchuanyangjin.com\u002Fmmtom-qa-leaderboard) - 一个多模态问答基准，用于评估 AI 模型理解人类信念和目标的认知能力。\n- [OlympicArena](https:\u002F\u002Fgair-nlp.github.io\u002FOlympicArena\u002F#leaderboard) - 一个用于评估 AI 模型在数学、物理、化学、生物等多个学科领域表现的基准。\n- [PubMedQA](https:\u002F\u002Fpubmedqa.github.io) - 一个生物医学问答基准，专为利用 PubMed 摘要回答科研相关问题而设计。\n- [SciBench](https:\u002F\u002Fscibench-ucla.github.io\u002F#leaderboard) - 一个用于评估大型语言模型（LLMs）解决化学、物理和数学等领域复杂大学水平科学问题的基准。\n- [SuperBench](https:\u002F\u002Ffm.ai.tsinghua.edu.cn\u002Fsuperbench\u002F#\u002Fleaderboard) - 一个用于评估大型语言模型（LLMs）在多项任务上表现的基准平台，尤其关注其在自然语言理解、推理和泛化等方面的能力。\n- [SuperLim](https:\u002F\u002Flab.kb.se\u002Fleaderboard\u002Fresults) - 一个瑞典语理解基准，评估自然语言处理（NLP）模型在论证分析、语义相似度和文本蕴含等多种任务上的表现。\n- [TAT-DQA](https:\u002F\u002Fnextplusplus.github.io\u002FTAT-DQA) - 一个大规模文档视觉问答（VQA）数据集，专为复杂文档理解而设计，尤其是财务报告。\n- [TAT-QA](https:\u002F\u002Fnextplusplus.github.io\u002FTAT-QA) - 一个专注于现实世界金融数据的大规模问答基准，整合了表格和文本信息。\n- [VisualWebArena](https:\u002F\u002Fjykoh.com\u002Fvwa) - 一个用于评估多模态网络代理在真实视觉情境任务中表现的基准。\n- [We-Math](https:\u002F\u002Fwe-math.github.io\u002F#leaderboard) - 一个评估大型多模态模型（LMMs）进行类人数学推理能力的基准。\n- [WHOOPS!](https:\u002F\u002Fwhoops-benchmark.github.io) - 一个基准数据集，通过违背常规预期的图像来测试 AI 对视觉常识的推理能力。\n\n\u003C\u002Fdetails>\n\n## 开源大模型\n\u003Cdetails>\n\u003Csummary>DeepSeek\u003C\u002Fsummary>\n  \n  - [DeepSeek-Math-7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-math-65f2962739da11599e441681)\n  - [DeepSeek-Coder-1.3|6.7|7|33B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-coder-65f295d7d8a0a29fe39b4ec4)\n  - [DeepSeek-VL-1.3|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-vl-65f295948133d9cf92b706d3)\n  - [DeepSeek-MoE-16B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fdeepseek-ai\u002Fdeepseek-moe-65f29679f5cf26fe063686bf)\n  - [DeepSeek-v2-236B-MoE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.04434)\n  - [DeepSeek-Coder-v2-16|236B-MOE](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-Coder-V2)\n  - [DeepSeek-V2.5](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-V2.5)\n  - [DeepSeek-V3](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-V3)\n  - [DeepSeek-R1](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>阿里巴巴\u003C\u002Fsummary>\n\n  - [通义千问-1.8B|7B|14B|72B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FQwen\u002Fqwen-65c0e50c3f1ab89cb8704144)\n  - [Qwen1.5-0.5B|1.8B|4B|7B|14B|32B|72B|110B|MoE-A2.7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen1.5\u002F)\n  - [Qwen2-0.5B|1.5B|7B|57B-A14B-MoE|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2)\n  - [Qwen2.5-0.5B|1.5B|3B|7B|14B|32B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5\u002F)\n  - [CodeQwen1.5-7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fcodeqwen1.5\u002F)\n  - [Qwen2.5-Coder-1.5B|7B|32B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-coder\u002F)\n  - [Qwen2-Math-1.5B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-math\u002F)\n  - [Qwen2.5-Math-1.5B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-math\u002F)\n  - [Qwen-VL-7B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen-VL)\n  - [Qwen2-VL-2B|7B|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-vl\u002F)\n  - [Qwen2-Audio-7B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2-audio\u002F)\n  - [Qwen2.5-VL-3|7|72B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-vl\u002F)\n  - [Qwen2.5-1M-7|14B](https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen2.5-1m\u002F)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Meta\u003C\u002Fsummary>\n\n  - [Llama 3.2-1|3|11|90B](https:\u002F\u002Fllama.meta.com\u002F)\n  - [Llama 3.1-8|70|405B](https:\u002F\u002Fllama.meta.com\u002F)\n  - [Llama 3-8|70B](https:\u002F\u002Fllama.meta.com\u002Fllama3\u002F)\n  - [Llama 2-7|13|70B](https:\u002F\u002Fllama.meta.com\u002Fllama2\u002F)\n  - [Llama 1-7|13|33|65B](https:\u002F\u002Fai.facebook.com\u002Fblog\u002Flarge-language-model-llama-meta-ai\u002F)\n  - [OPT-1.3|6.7|13|30|66B](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.01068)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Mistral AI\u003C\u002Fsummary>\n\n  - [Codestral-7|22B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fcodestral\u002F)\n  - [Mistral-7B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fannouncing-mistral-7b\u002F)\n  - [Mixtral-8x7B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fmixtral-of-experts\u002F)\n  - [Mixtral-8x22B](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fmixtral-8x22b\u002F)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>谷歌\u003C\u002Fsummary>\n\n  - [Gemma2-9|27B](https:\u002F\u002Fblog.google\u002Ftechnology\u002Fdevelopers\u002Fgoogle-gemma-2\u002F)\n  - [Gemma-2|7B](https:\u002F\u002Fblog.google\u002Ftechnology\u002Fdevelopers\u002Fgemma-open-models\u002F)\n  - [RecurrentGemma-2B](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Frecurrentgemma)\n  - [T5](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.10683)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>苹果\u003C\u002Fsummary>\n\n  - [OpenELM-1.1|3B](https:\u002F\u002Fhuggingface.co\u002Fapple\u002FOpenELM)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>微软\u003C\u002Fsummary>\n\n  - [Phi1-1.3B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fphi-1)\n  - [Phi2-2.7B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002Fphi-2)\n  - [Phi3-3.8|7|14B](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FPhi-3-mini-4k-instruct)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>AllenAI\u003C\u002Fsummary>\n\n  - [OLMo-7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fallenai\u002Folmo-suite-65aeaae8fe5b6b2122b46778)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>xAI\u003C\u002Fsummary>\n\n  - [Grok-1-314B-MoE](https:\u002F\u002Fx.ai\u002Fblog\u002Fgrok-os)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Cohere\u003C\u002Fsummary>\n\n  - [Command R-35B](https:\u002F\u002Fhuggingface.co\u002FCohereForAI\u002Fc4ai-command-r-v01)\n\n\u003C\u002Fdetails>\n\n\n\n\n\u003Cdetails>\n\u003Csummary>01-ai\u003C\u002Fsummary>\n\n  - [Yi-34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-2023-11-663f3f19119ff712e176720f)\n  - [Yi1.5-6|9|34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-15-2024-05-663f3ecab5f815a3eaca7ca8)\n  - [Yi-VL-6B|34B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002F01-ai\u002Fyi-vl-663f557228538eae745769f3)\n\n\u003C\u002Fdetails>\n \n \n\u003Cdetails>\n\u003Csummary>百川智能\u003C\u002Fsummary>\n\n   - [Baichuan-7|13B](https:\u002F\u002Fhuggingface.co\u002Fbaichuan-inc)\n   - [Baichuan2-7|13B](https:\u002F\u002Fhuggingface.co\u002Fbaichuan-inc)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Nvidia\u003C\u002Fsummary>\n\n   - [Nemotron-4-340B](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FNemotron-4-340B-Instruct)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>BLOOM\u003C\u002Fsummary>\n\n   - [BLOOMZ&mT0](https:\u002F\u002Fhuggingface.co\u002Fbigscience\u002Fbloomz)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>智谱AI\u003C\u002Fsummary>\n\n   - [GLM-2|6|10|13|70B](https:\u002F\u002Fhuggingface.co\u002FTHUDM)\n   - [CogVLM2-19B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FTHUDM\u002Fcogvlm2-6645f36a29948b67dc4eef75)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>OpenBMB\u003C\u002Fsummary>\n\n  - [MiniCPM-2B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fopenbmb\u002Fminicpm-2b-65d48bf958302b9fd25b698f)\n  - [OmniLLM-12B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FOmniLMM-12B)\n  - [VisCPM-10B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FVisCPM-Chat)\n  - [CPM-Bee-1|2|5|10B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fopenbmb\u002Fcpm-bee-65d491cc84fc93350d789361)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>RWKV基金会\u003C\u002Fsummary>\n\n  - [RWKV-v4|5|6](https:\u002F\u002Fhuggingface.co\u002FRWKV)minicpm-2b-65d48bf958302b9fd25b698f)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>ElutherAI\u003C\u002Fsummary>\n\n  - [Pythia-1|1.4|2.8|6.9|12B](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fpythia)\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Stability AI\u003C\u002Fsummary>\n\n  - [StableLM-3B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-3b-4e1t)\n  - [StableLM-v2-1.6B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-2-1_6b)\n  - [StableLM-v2-12B](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstablelm-2-12b)\n  - [StableCode-3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fstabilityai\u002Fstable-code-64f9dfb4ebc8a1be0a3f7650)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>BigCode\u003C\u002Fsummary>\n\n  - [StarCoder-1|3|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fbigcode\u002F%E2%AD%90-starcoder-64f9bd5740eb5daaeb81dbec)\n  - [StarCoder2-3|7|15B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fbigcode\u002Fstarcoder2-65de6da6e87db3383572be1a)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>DataBricks\u003C\u002Fsummary>\n\n  - [MPT-7B](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002Fmpt-7b)\n  - [DBRX-132B-MoE](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002Fintroducing-dbrx-new-state-art-open-llm)\n\n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>上海人工智能实验室\u003C\u002Fsummary>\n  \n  - [InternLM2-1.8|7|20B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm2-65b0ce04970888799707893c)\n  - [InternLM-Math-7B|20B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm2-math-65b0ce88bf7d3327d0a5ad9f)\n  - [InternLM-XComposer2-1.8|7B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Finternlm\u002Finternlm-xcomposer2-65b3706bf5d76208998e7477)\n  - [InternVL-2|6|14|26](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FOpenGVLab\u002Finternvl-65b92d6be81c86166ca0dde4)\n\n    \n\u003C\u002Fdetails>\n\u003Cdetails>\n\u003Csummary>Moonshot AI\u003C\u002Fsummary>\n  \n  - [Moonlight-A3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fmoonlight-a3b-67f67b029cecfdce34f4dc23)\n  - [Kimi-VL-A3B](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fkimi-vl-a3b-67f67b6ac91d3b03d382dd85)\n  - [Kimi-K2](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmoonshotai\u002Fkimi-k2-6871243b990f2af5ba60617d)\n    \n\u003C\u002Fdetails>\n\n## LLM 数据\n> 参考：[LLMDataHub](https:\u002F\u002Fgithub.com\u002FZjh-819\u002FLLMDataHub)\n- [IBM data-prep-kit](https:\u002F\u002Fgithub.com\u002FIBM\u002Fdata-prep-kit) - 一个开源工具包，用于高效处理非结构化数据，提供预构建模块，并支持从本地到集群的可扩展性。\n- [Datatrove](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdatatrove) - 通过提供一组与平台无关的可定制流水线处理模块，将数据处理从繁琐的脚本编写中解放出来。\n- [Dingo](https:\u002F\u002Fgithub.com\u002FDataEval\u002Fdingo) - Dingo：一款全面的数据质量评估工具\n- [FastDatasets](https:\u002F\u002Fgithub.com\u002FZhuLinsen\u002FFastDatasets) - 一个强大的工具，用于为大型语言模型创建高质量的训练数据集\n\n## LLM 评估：\n- [lm-evaluation-harness](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness) - 一个用于对语言模型进行少样本评估的框架。\n- [lighteval](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Flighteval) - Hugging Face 内部一直在使用的轻量级 LLM 评估套件。\n- [simple-evals](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fsimple-evals) - OpenAI 提供的评估工具。\n\n\u003Cdetails>\n\u003Csummary>其他评估框架\u003C\u002Fsummary>\n\n- [OLMO-eval](https:\u002F\u002Fgithub.com\u002Fallenai\u002FOLMo-Eval) - 一个用于评估开放语言模型的仓库。\n- [MixEval](https:\u002F\u002Fgithub.com\u002FPsycoy\u002FMixEval) - 一个可靠、开箱即用的评估套件，兼容开源和专有模型，支持 MixEval 等基准测试。\n- [HELM](https:\u002F\u002Fgithub.com\u002Fstanford-crfm\u002Fhelm) - 语言模型综合评估（HELM），一个旨在提高语言模型透明度的框架。\n- [instruct-eval](https:\u002F\u002Fgithub.com\u002Fdeclare-lab\u002Finstruct-eval) - 此仓库包含用于定量评估 Alpaca 和 Flan-T5 等指令微调模型在未见任务上表现的代码。\n- [Giskard](https:\u002F\u002Fgithub.com\u002FGiskard-AI\u002Fgiskard) - 针对 LLM 应用程序，尤其是 RAG 的测试与评估库。\n- [LangSmith](https:\u002F\u002Fwww.langchain.com\u002Flangsmith) - LangChain 框架提供的统一平台，用于评估、协作（HITL）、日志记录和监控 LLM 应用程序。\n- [Ragas](https:\u002F\u002Fgithub.com\u002Fexplodinggradients\u002Fragas) - 一个帮助您评估检索增强生成（RAG）管道的框架。\n\n\u003C\u002Fdetails>\n\n\n\n## LLM 训练框架\n\n- [Meta Lingua](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Flingua) - 一个精简、高效且易于修改的代码库，用于研究 LLM。\n- [Litgpt](https:\u002F\u002Fgithub.com\u002FLightning-AI\u002Flitgpt) - 包含 20 多种高性能 LLM，附带大规模预训练、微调和部署的配方。\n- [nanotron](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fnanotron) - 极简主义的大规模语言模型 3D 并行训练。\n- [DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed) - DeepSpeed 是一个深度学习优化库，使分布式训练和推理变得简单、高效且富有成效。\n- [Megatron-LM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FMegatron-LM) - 持续研究大规模 Transformer 模型的训练。\n- [torchtitan](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftorchtitan) - 一个原生 PyTorch 库，用于大规模模型训练。\n\n\u003Cdetails>\n\u003Csummary>其他框架\u003C\u002Fsummary>\n\n  - [Megatron-DeepSpeed](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FMegatron-DeepSpeed) - NVIDIA Megatron-LM 的 DeepSpeed 版本，增加了对 MoE 模型训练、课程学习、3D 并行等特性的额外支持。\n  - [torchtune](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ftorchtune) - 一个原生 PyTorch 库，用于 LLM 微调。\n  - [ROLL](https:\u002F\u002Fgithub.com\u002Falibaba\u002FROLL) - 一个高效且易用的强化学习缩放库，适用于大型语言模型。\n  - [veRL](https:\u002F\u002Fgithub.com\u002Fvolcengine\u002Fverl) - veRL 是一个灵活高效的 LLM 强化学习框架。\n  - [NeMo Framework](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FNeMo) - 一个面向研究人员和 PyTorch 开发者的生成式 AI 框架，专注于大型语言模型（LLMs）、多模态模型（MMs）、自动语音识别（ASR）、文本转语音（TTS）以及计算机视觉（CV）等领域。\n  - [Colossal-AI](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI) - 使大型 AI 模型更便宜、更快、更易获取。\n  - [BMTrain](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FBMTrain) - 大模型的高效训练。\n  - [Mesh Tensorflow](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmesh) - Mesh TensorFlow：让模型并行更容易。\n  - [maxtext](https:\u002F\u002Fgithub.com\u002FAI-Hypercomputer\u002Fmaxtext) - 一个简单、高效且可扩展的 Jax LLM！\n  - [GPT-NeoX](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Fgpt-neox) - 基于 DeepSpeed 库，在 GPU 上实现的模型并行自回归 Transformer 实现。\n  - [Transformer Engine](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTransformerEngine) - 一个用于加速在 NVIDIA GPU 上训练 Transformer 模型的库。\n  - [OpenRLHF](https:\u002F\u002Fgithub.com\u002FOpenRLHF\u002FOpenRLHF) - 一个易于使用、可扩展且高性能的 RLHF 框架（70B+ PPO 全量微调、迭代 DPO、LoRA、RingAttention 和 RFT）。\n  - [TRL](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftrl\u002Fen\u002Findex) - TRL 是一个全栈库，我们提供一系列工具，用于通过强化学习训练 Transformer 语言模型，涵盖监督微调（SFT）、奖励建模（RM）以及近端策略优化（PPO）等步骤。\n  - [unslothai](https:\u002F\u002Fgithub.com\u002Funslothai\u002Funsloth) - 一个专门从事高效微调的框架。在其 GitHub 页面上，您可以找到适用于各种 LLM 的即用型微调模板，让您能够轻松地在 Google Colab 云端免费使用自己的数据进行训练。\n  - [Axolotl](https:\u002F\u002Fgithub.com\u002Faxolotl-ai-cloud\u002Faxolotl) - 一个开源的 LLM 微调和评估框架。它简化了尝试不同训练配置的过程，便于结果的复现和分享，支持 LoRA、QLoRA、DeepSpeed、PEFT 以及多 GPU 配置等功能。\n\n\u003C\u002Fdetails>\n\n## 大语言模型推理\n\n> 参考：[llm-inference-solutions](https:\u002F\u002Fgithub.com\u002Fmani-kantap\u002Fllm-inference-solutions)\n- [SGLang](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang) - SGLang 是一个用于大型语言模型和视觉语言模型的快速推理框架。\n- [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm) - 一个高吞吐量、内存高效的 LLM 推理与服务引擎。\n- [llama.cpp](https:\u002F\u002Fgithub.com\u002Fggerganov\u002Fllama.cpp) - 使用 C\u002FC++ 进行 LLM 推理。\n- [ollama](https:\u002F\u002Fgithub.com\u002Follama\u002Follama) - 快速启动并运行 Llama 3、Mistral、Gemma 等大型语言模型。\n- [TGI](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftext-generation-inference\u002Fen\u002Findex) - 用于部署和提供大型语言模型（LLMs）服务的工具包。\n- [TensorRT-LLM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-LLM) - NVIDIA 提供的 LLM 推理框架\n\u003Cdetails>\n\u003Csummary>其他部署工具\u003C\u002Fsummary>\n\n- [FasterTransformer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FFasterTransformer) - NVIDIA 的 LLM 推理框架（已过渡到 TensorRT-LLM）\n- [MInference](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FMInference) - 通过近似计算和动态稀疏注意力机制加速长上下文 LLM 的推理，在 A100 上预填充阶段可将推理延迟降低多达 10 倍，同时保持精度。\n- [exllama](https:\u002F\u002Fgithub.com\u002Fturboderp\u002Fexllama) - 针对量化权重优化的 Llama 模型实现，内存效率更高。\n- [FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) - 一个具有 Web UI 和 OpenAI 兼容 RESTful API 的分布式多模型 LLM 服务系统。\n- [mistral.rs](https:\u002F\u002Fgithub.com\u002FEricLBuehler\u002Fmistral.rs) - 极速的 LLM 推理。\n- [SkyPilot](https:\u002F\u002Fgithub.com\u002Fskypilot-org\u002Fskypilot) - 在任何云平台上运行 LLM 和批处理任务。通过简单的界面实现最大成本节约、最高 GPU 可用性以及托管式执行。\n- [Haystack](https:\u002F\u002Fhaystack.deepset.ai\u002F) - 一个开源 NLP 框架，允许您使用来自 Hugging Face、OpenAI 和 Cohere 的 LLM 和基于 Transformer 的模型与自有数据进行交互。\n- [OpenLLM](https:\u002F\u002Fgithub.com\u002Fbentoml\u002FOpenLLM) - 用于在生产环境中微调、部署、服务和监控任何开源 LLM。已在 [BentoML](https:\u002F\u002Fbentoml.com\u002F) 的 LLM 相关应用中投入生产使用。\n- [DeepSpeed-MII](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FDeepSpeed-MII) - MII 提供低延迟、高吞吐量的推理能力，类似于由 DeepSpeed 支持的 vLLM。\n- [Text-Embeddings-Inference](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftext-embeddings-inference) - Rust 实现的文本嵌入推理工具，采用 HFOIL 许可证。\n- [Infinity](https:\u002F\u002Fgithub.com\u002Fmichaelfeil\u002Finfinity) - Python 实现的文本嵌入推理工具。\n- [LMDeploy](https:\u002F\u002Fgithub.com\u002FInternLM\u002Flmdeploy) - 一个高吞吐量、低延迟的 LLM 和 VL 推理与服务框架。\n- [Liger-Kernel](https:\u002F\u002Fgithub.com\u002Flinkedin\u002FLiger-Kernel) - 高效的 Triton 内核，用于 LLM 训练。\n- [prima.cpp](https:\u002F\u002Fgithub.com\u002FLizonghang\u002Fprima.cpp) - llama.cpp 的分布式实现，可在日常设备上运行 700 亿参数级别的 LLM。\n- [deploy-llms-with-ansible](https:\u002F\u002Fgithub.com\u002Fxamey\u002Fdeploy-llms-with-ansible) - 使用 Ansible，只需少量配置即可轻松在虚拟机上部署任意 LLM。\n\n\u003C\u002Fdetails>\n\n\n## LLM 应用\n> 参考：[awesome-llm-apps](https:\u002F\u002Fgithub.com\u002FShubhamsaboo\u002Fawesome-llm-apps)\n- [dspy](https:\u002F\u002Fgithub.com\u002Fstanfordnlp\u002Fdspy) - DSPy：用于编程基础模型而非提示词的框架。\n- [LangChain](https:\u002F\u002Fgithub.com\u002Fhwchase17\u002Flangchain) — 一个流行的 Python\u002FJavaScript 库，用于串联语言模型的提示序列。\n- [LlamaIndex](https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fllama_index) — 一个 Python 库，用于通过数据增强 LLM 应用。\n\n\u003Cdetails>\n\u003Csummary>更多应用\u003C\u002Fsummary>\n\n- [MLflow](https:\u002F\u002Fmlflow.org\u002F) - MLflow：一个用于端到端机器学习生命周期的开源框架，帮助开发者跟踪实验、评估模型\u002F提示词、部署模型，并通过追踪功能增强可观ility。\n- [Swiss Army Llama](https:\u002F\u002Fgithub.com\u002FDicklesworthstone\u002Fswiss_army_llama) - 一套全面的工具集，用于在本地运行LLM并完成各种任务。\n- [LiteChain](https:\u002F\u002Fgithub.com\u002Frogeriochaves\u002Flitechain) - LangChain的轻量级替代方案，用于组合LLM。\n- [magentic](https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic) - 将LLM无缝集成为Python函数。\n- [wechat-chatgpt](https:\u002F\u002Fgithub.com\u002Ffuergaosi233\u002Fwechat-chatgpt) - 通过Wechaty在微信上使用ChatGPT。\n- [promptfoo](https:\u002F\u002Fgithub.com\u002Ftyppo\u002Fpromptfoo) - 测试你的提示词。评估和比较LLM的输出，捕捉回归问题，并提升提示词质量。\n- [Agenta](https:\u002F\u002Fgithub.com\u002Fagenta-ai\u002Fagenta) - 轻松构建、版本化、评估和部署基于LLM的应用程序。\n- [Serge](https:\u002F\u002Fgithub.com\u002Fserge-chat\u002Fserge) - 基于llama.cpp打造的聊天界面，用于运行Alpaca模型。无需API密钥，完全自托管！\n- [Langroid](https:\u002F\u002Fgithub.com\u002Flangroid\u002Flangroid) - 使用多智能体编程驾驭LLM。\n- [Embedchain](https:\u002F\u002Fgithub.com\u002Fembedchain\u002Fembedchain) - 一个框架，可在你的数据集上创建类似ChatGPT的聊天机器人。\n- [Opik](https:\u002F\u002Fgithub.com\u002Fcomet-ml\u002Fopik) - 通过一系列可观ability工具，自信地评估、测试并交付LLM应用，以校准开发和生产周期中语言模型的输出。\n- [IntelliServer](https:\u002F\u002Fgithub.com\u002Fintelligentnode\u002FIntelliServer) - 通过提供统一的微服务来访问和测试多种AI模型，简化LLM的评估流程。\n- [Langchain-Chatchat](https:\u002F\u002Fgithub.com\u002Fchatchat-space\u002FLangchain-Chatchat) - 前身为langchain-ChatGLM，是一款结合Langchain的本地知识型LLM问答应用（如ChatGLM）。\n- [Search with Lepton](https:\u002F\u002Fgithub.com\u002Fleptonai\u002Fsearch_with_lepton) - 使用不到500行代码，由[LeptonAI](https:\u002F\u002Fgithub.com\u002Fleptonai)构建属于你自己的对话式搜索引擎。\n- [Robocorp](https:\u002F\u002Fgithub.com\u002Frobocorp\u002Frobocorp) - 在任何地方使用Python创建、部署和运行Actions，以增强你的AI代理和助手。内置了丰富的库、辅助工具和日志记录功能。\n- [Tune Studio](https:\u002F\u002Fstudio.tune.app\u002F) - 开发者用于微调和部署LLM的游乐场。\n- [LLocalSearch](https:\u002F\u002Fgithub.com\u002Fnilsherzig\u002FLLocalSearch) - 使用LLM链路在本地运行网络搜索。\n- [AI Gateway](https:\u002F\u002Fgithub.com\u002FPortkey-AI\u002Fgateway) — Gateway通过统一的API简化对100多种开源和闭源模型的请求。它已具备生产就绪条件，支持缓存、回退、重试、超时、负载均衡等功能，并可部署在边缘以实现最低延迟。\n- [talkd.ai dialog](https:\u002F\u002Fgithub.com\u002Ftalkdai\u002Fdialog) - 一个简单的API，用于部署任何RAG或LLM，并可添加插件。\n- [Wllama](https:\u002F\u002Fgithub.com\u002Fngxson\u002Fwllama) - llama.cpp的WebAssembly绑定，支持在浏览器中进行LLM推理。\n- [GPUStack](https:\u002F\u002Fgithub.com\u002Fgpustack\u002Fgpustack) - 一个开源的GPU集群管理器，用于运行LLM。\n- [MNN-LLM](https:\u002F\u002Fgithub.com\u002Falibaba\u002FMNN) -- 一个设备端推理框架，支持在设备上（手机\u002FPC\u002FIoT）进行LLM推理。\n- [CAMEL](https:\u002F\u002Fwww.camel-ai.org\u002F) - 首个LLM多智能体框架。\n- [QA-Pilot](https:\u002F\u002Fgithub.com\u002Freid41\u002FQA-Pilot) - 一个交互式聊天项目，利用Ollama\u002FOpenAI\u002FMistralAI的LLM快速理解并导航GitHub代码仓库或压缩文件资源。\n- [Shell-Pilot](https:\u002F\u002Fgithub.com\u002Freid41\u002Fshell-pilot) - 在Linux（或MacOS）系统上通过纯Shell脚本与Ollama模型（或OpenAI、MistralAI）互动，增强智能系统管理能力，且无需任何依赖。\n- [MindSQL](https:\u002F\u002Fgithub.com\u002FMindinventory\u002FMindSQL) - 一个Python包，用于文本转SQL，并具备自托管功能及兼容专有和开源LLM的RESTful API。\n- [Langfuse](https:\u002F\u002Fgithub.com\u002Flangfuse\u002Flangfuse) - 开源LLM工程平台 🪢 支持追踪、评估、提示词管理、评测和游乐场。\n- [AdalFlow](https:\u002F\u002Fgithub.com\u002FSylphAI-Inc\u002FAdalFlow) - AdalFlow：用于构建和自动优化LLM应用的库。\n- [Guidance](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fguidance) — 微软推出的一款便捷Python库，采用Handlebars模板技术，实现生成、提示和逻辑控制的交织。\n- [Evidently](https:\u002F\u002Fgithub.com\u002Fevidentlyai\u002Fevidently) — 一个开源框架，用于评估、测试和监控ML及LLM驱动的系统。\n- [Chainlit](https:\u002F\u002Fdocs.chainlit.io\u002Foverview) — 一个用于构建聊天机器人界面的Python库。\n- [Guardrails.ai](https:\u002F\u002Fwww.guardrailsai.com\u002Fdocs\u002F) — 一个用于验证输出并重试失败的Python库。目前仍处于Alpha阶段，可能存在一些不完善之处。\n- [Semantic Kernel](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fsemantic-kernel) — 微软推出的一套Python\u002FC#\u002FJava库，支持提示词模板、函数链式调用、向量化记忆和智能规划。\n- [Prompttools](https:\u002F\u002Fgithub.com\u002Fhegelai\u002Fprompttools) — 一套开源的Python工具，用于测试和评估模型、向量数据库和提示词。\n- [Outlines](https:\u002F\u002Fgithub.com\u002Fnormal-computing\u002Foutlines) — 一个Python库，提供领域特定语言，以简化提示词设计并约束生成内容。\n- [Promptify](https:\u002F\u002Fgithub.com\u002Fpromptslab\u002FPromptify) — 一个小型Python库，用于利用语言模型执行NLP任务。\n- [Scale Spellbook](https:\u002F\u002Fscale.com\u002Fspellbook) — 一款付费产品，用于构建、比较和部署语言模型应用。\n- [PromptPerfect](https:\u002F\u002Fpromptperfect.jina.ai\u002Fprompts) — 一款付费产品，用于测试和优化提示词。\n- [Weights & Biases](https:\u002F\u002Fwandb.ai\u002Fsite\u002Fsolutions\u002Fllmops) — 一款付费产品，用于跟踪模型训练和提示词工程实验。\n- [OpenAI Evals](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fevals) — 一个开源库，用于评估语言模型和提示词的任务表现。\n\n- [Arthur Shield](https:\u002F\u002Fwww.arthur.ai\u002Fget-started) — 一款付费产品，用于检测毒性内容、幻觉现象、提示注入等问题。\n- [LMQL](https:\u002F\u002Flmql.ai) — 一种用于与大语言模型交互的编程语言，支持类型化提示、控制流、约束条件和工具集成。\n- [ModelFusion](https:\u002F\u002Fgithub.com\u002Flgrammel\u002Fmodelfusion) - 一个基于TypeScript的库，用于构建结合大语言模型及其他机器学习模型（如语音转文本、文本转语音、图像生成）的应用程序。\n- [OneKE](https:\u002F\u002Fopenspg.yuque.com\u002Fndx6g9\u002Fps5q6b\u002Fvfoi61ks3mqwygvy) — 一个中英双语知识抽取模型，结合了知识图谱和自然语言处理技术。\n- [llm-ui](https:\u002F\u002Fgithub.com\u002Fllm-ui-kit\u002Fllm-ui) - 一个用于构建大语言模型用户界面的React库。\n- [Wordware](https:\u002F\u002Fwww.wordware.ai) - 一个基于Web的集成开发环境，非技术领域的专家可以与AI工程师合作，共同构建特定任务的AI代理。我们把提示工程视为一种新的编程语言，而非低代码或无代码的积木式操作。\n- [Wallaroo.AI](https:\u002F\u002Fgithub.com\u002FWallarooLabs) - 可在任何环境中大规模部署、管理和优化各类模型，从云端到边缘端。帮助用户在几分钟内从Python笔记本过渡到推理阶段。\n- [Dify](https:\u002F\u002Fgithub.com\u002Flanggenius\u002Fdify) - 一个开源的大语言模型应用开发平台，提供直观的界面，简化AI工作流程、模型管理和生产部署。\n- [LazyLLM](https:\u002F\u002Fgithub.com\u002FLazyAGI\u002FLazyLLM) - 一个开源的大语言模型应用框架，以简单便捷的方式构建多智能体系统，支持模型部署和微调。\n- [MemFree](https:\u002F\u002Fgithub.com\u002Fmemfreeme\u002Fmemfree) - 开源混合型AI搜索引擎，可即时从互联网、书签、笔记和文档中获取准确答案。支持一键部署。\n- [AutoRAG](https:\u002F\u002Fgithub.com\u002FMarker-Inc-Korea\u002FAutoRAG) - 一个开源的AutoML工具，专为RAG系统设计，能够自动优化RAG的回答质量。从生成评估数据集到部署优化后的RAG流水线，全程自动化。\n- [Epsilla](https:\u002F\u002Fgithub.com\u002Fepsilla-cloud) - 一个一体化的大语言模型代理平台，结合用户私有数据和知识，可在第一天就交付生产就绪的AI代理。\n- [Arize-Phoenix](https:\u002F\u002Fphoenix.arize.com\u002F) - 一个开源的机器学习可观测性工具，可在用户的笔记本环境中运行。可用于监控和微调大语言模型、计算机视觉模型及表格数据模型。\n- [LLM]([https:\u002F\u002Fgithub.com\u002Fsimonw\u002Fllm) - 一个命令行工具和Python库，用于与大型语言模型交互，既可通过远程API使用，也可在本地安装并运行。\n- [Just-Chat](https:\u002F\u002Fgithub.com\u002Flongevity-genie\u002Fjust-chat) - 让你快速简便地创建自己的大语言模型代理并与之对话！\n- [Agentic Radar](https:\u002F\u002Fgithub.com\u002Fsplx-ai\u002Fagentic-radar) - 一个开源的CLI安全扫描工具，专门用于检测智能体工作流中的漏洞，并生成交互式可视化图表及详细的安全报告。支持LangGraph、CrewAI、n8n、OpenAI Agents等框架。\n- [LangWatch](https:\u002F\u002Fgithub.com\u002Flangwatch\u002Flangwatch) - 一个开源的大语言模型可观测性、提示评估和提示优化平台。\n- [TensorZero](https:\u002F\u002Fwww.tensorzero.com\u002F) - TensorZero是一个开源框架，用于构建生产级别的大语言模型应用。它整合了大语言模型网关、可观测性、优化、评估和实验功能。\n\n\u003C\u002Fdetails>\n\n\n\n## 大语言模型教程与课程\n- [Andrej Karpathy系列](https:\u002F\u002Fwww.youtube.com\u002F@AndrejKarpathy) - 我最喜欢的！\n- [Umar Jamil系列](https:\u002F\u002Fwww.youtube.com\u002F@umarjamilai) - 高质量且富有教育意义的视频，不容错过。\n- [Alexander Rush系列](https:\u002F\u002Frush-nlp.com\u002Fprojects\u002F) - 高质量且富有教育意义的资料，值得一看。\n- [llm-course](https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course) - 一门关于大型语言模型（LLMs）的入门课程，包含学习路线图和Colab笔记本。\n- [UWaterloo CS 886](https:\u002F\u002Fcs.uwaterloo.ca\u002F~wenhuche\u002Fteaching\u002Fcs886\u002F) - 基础模型的最新进展。\n- [CS25-Transformers United](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs25\u002F)\n- [ChatGPT提示工程](https:\u002F\u002Fwww.deeplearning.ai\u002Fshort-courses\u002Fchatgpt-prompt-engineering-for-developers\u002F)\n- [普林斯顿大学：理解大型语言模型](https:\u002F\u002Fwww.cs.princeton.edu\u002Fcourses\u002Farchive\u002Ffall22\u002Fcos597G\u002F)\n- [CS324 - 大型语言模型](https:\u002F\u002Fstanford-cs324.github.io\u002Fwinter2022\u002F)\n- [GPT现状](https:\u002F\u002Fbuild.microsoft.com\u002Fen-US\u002Fsessions\u002Fdb3f4859-cd30-4445-a0cd-553c3304f8e2)\n- [Mamba与状态空间模型的可视化指南](https:\u002F\u002Fmaartengrootendorst.substack.com\u002Fp\u002Fa-visual-guide-to-mamba-and-state?utm_source=multiple-personal-recommendations-email&utm_medium=email&open=false)\n- [让我们从零开始用代码构建GPT](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kCc8FmEb1nY)\n- [minbpe](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zduSFxRajkE&t=1157s) - 一段极简、清晰的代码，实现了常用于大语言模型分词的字节对编码（BPE）算法。\n- [femtoGPT](https:\u002F\u002Fgithub.com\u002Fkeyvank\u002FfemtoGPT) - 一个纯Rust实现的极简版生成式预训练Transformer模型。\n- [NeurIPS 2022 - 基础模型的稳健性](https:\u002F\u002Fnips.cc\u002Fvirtual\u002F2022\u002Ftutorial\u002F55796)\n- [ICML 2022 - 欢迎进入“大模型”时代：训练与服务更大模型的技术与系统](https:\u002F\u002Ficml.cc\u002Fvirtual\u002F2022\u002Ftutorial\u002F18440)\n- [用60行NumPy代码实现GPT](https:\u002F\u002Fjaykmody.com\u002Fblog\u002Fgpt-from-scratch\u002F)\n- [LLM‑RL‑Visualized (英文)](https:\u002F\u002Fgithub.com\u002Fchangyeyu\u002FLLM-RL-Visualized\u002Fblob\u002Fmaster\u002Fsrc\u002FREADME_EN.md) | [LLM‑RL‑Visualized (中文)](https:\u002F\u002Fgithub.com\u002Fchangyeyu\u002FLLM-RL-Visualized) - 超过100张LLM\u002FRL算法图📚.\n\n\n## 大语言模型相关书籍\n- [使用LangChain构建生成式AI：用Python、ChatGPT及其他大语言模型打造应用](https:\u002F\u002Famzn.to\u002F3GUlRng) - 附带一个[GitHub仓库](https:\u002F\u002Fgithub.com\u002Fbenman1\u002Fgenerative_ai_with_langchain)，展示了大量功能。\n- [从零开始构建大型语言模型](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fbuild-a-large-language-model-from-scratch) - 一本教你如何构建属于自己的可用大语言模型的指南。\n- [构建GPT：AI是如何工作的](https:\u002F\u002Fwww.amazon.com\u002Fdp\u002F9152799727?ref_=cm_sw_r_cp_ud_dp_W3ZHCD6QWM3DPPC0ARTT_1) - 详细解释了如何从零开始编写一个生成式预训练Transformer（即GPT）。\n- [动手实践大型语言模型：语言理解与生成](https:\u002F\u002Fwww.llm-book.com\u002F) - 在这本图文并茂的指南中，通过超过275幅定制插图探索大型语言模型的世界！\n- [中国大型语言模型教材](http:\u002F\u002Faibox.ruc.edu.cn\u002Fzws\u002Findex.htm) - 一本基于[*大型语言模型综述*](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223)的入门级大语言模型教科书。\n\n## 关于大语言模型的精彩观点\n- [为什么所有公开的GPT-3复现都失败了？](https:\u002F\u002Fjingfengyang.github.io\u002Fgpt)\n- [指令微调阶段性回顾](https:\u002F\u002Fyaofu.notion.site\u002FJune-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2)\n- [由大语言模型驱动的自主智能体](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-06-23-agent\u002F)\n- [为什么你应该投身于AI智能体的研究！](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fqVLjtvWgq8)\n- [谷歌：“我们没有护城河，OpenAI也没有”](https:\u002F\u002Fwww.semianalysis.com\u002Fp\u002Fgoogle-we-have-no-moat-and-neither)\n- [AI竞争声明](https:\u002F\u002Fpetergabriel.com\u002Fnews\u002Fai-competition-statement\u002F)\n- [提示工程](https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-03-15-prompt-engineering\u002F)\n- [诺姆·乔姆斯基：ChatGPT的虚假承诺](https:\u002F\u002Fwww.nytimes.com\u002F2023\u002F03\u002F08\u002Fopinion\u002Fnoam-chomsky-chatgpt-ai.html)\n- [ChatGPT真的有1750亿参数吗？技术分析](https:\u002F\u002Forenleung.super.site\u002Fis-chatgpt-175-billion-parameters-technical-analysis)\n- [下一代大型语言模型](https:\u002F\u002Fwww.notion.so\u002FAwesome-LLM-40c8aa3f2b444ecc82b79ae8bbd2696b)\n- [2023年大型语言模型训练](https:\u002F\u002Fresearch.aimultiple.com\u002Flarge-language-model-training\u002F)\n- [GPT是如何获得其能力的？追溯语言模型涌现能力的来源](https:\u002F\u002Fyaofu.notion.site\u002FHow-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1)\n- [开放的预训练Transformer模型](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=p9IxoSkvZ-M&t=4s)\n- [大型语言模型中的规模效应、涌现与推理](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1EUV7W7X_w0BDrscDhPg7lMGzJCkeaPkGCJ3bN8dluXc\u002Fedit?pli=1&resourcekey=0-7Nz5A7y8JozyVrnDtcEKJA#slide=id.g16197112905_0_0)\n\n## 其他\n\n- [Emergent Mind](https:\u002F\u002Fwww.emergentmind.com) - 由GPT-4精选并解读的最新AI资讯。\n- [ShareGPT](https:\u002F\u002Fsharegpt.com) - 一键分享你最疯狂的ChatGPT对话。\n- [主要大语言模型及数据可用性](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1bmpDdLZxvTCleLGVPgzoMTQ0iDP2-7v7QziPrzPdHyM\u002Fedit#gid=0)\n- [500+最佳AI工具](https:\u002F\u002Fvaulted-polonium-23c.notion.site\u002F500-Best-AI-Tools-e954b36bf688404ababf74a13f98d126)\n- [Cohere Summarize Beta](https:\u002F\u002Ftxt.cohere.ai\u002Fsummarize-beta\u002F) - 推出Cohere Summarize Beta：文本摘要的新端点\n- [chatgpt-wrapper](https:\u002F\u002Fgithub.com\u002Fmmabrouk\u002Fchatgpt-wrapper) - ChatGPT Wrapper是一个开源的非官方Python API和CLI，允许你与ChatGPT互动。\n- [Cursor](https:\u002F\u002Fwww.cursor.so) - 使用强大的AI编写、编辑代码并与他人讨论。\n- [AutoGPT](https:\u002F\u002Fgithub.com\u002FSignificant-Gravitas\u002FAuto-GPT) - 一个实验性的开源应用，展示了GPT-4语言模型的能力。\n- [OpenAGI](https:\u002F\u002Fgithub.com\u002Fagiresearch\u002FOpenAGI) - 当大语言模型遇到领域专家时。\n- [EasyEdit](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyEdit) - 一个易于使用的框架，用于编辑大型语言模型。\n- [chatgpt-shroud](https:\u002F\u002Fgithub.com\u002FguyShilo\u002Fchatgpt-shroud) - 一款适用于OpenAI ChatGPT的Chrome扩展程序，通过轻松隐藏和显示聊天记录来增强用户隐私。非常适合在屏幕共享时保护隐私。\n- [面向开发者的AI](https:\u002F\u002Faifordevelopers.org) - 面向开发者的AI工具和智能体列表\n\n## 贡献\n这是一个活跃的仓库，欢迎你的贡献！\n\n如果我对某些PR是否适合LLM感到不确定，我会保持它们开放，你可以通过为它们点赞来投票。\n\n---\n\n如果你对这份主观性强的列表有任何疑问，请随时联系我：chengxin1998@stu.pku.edu.cn。\n\n[^1]: 本内容不构成法律建议。如需更多信息，请联系相关模型的原始作者。","# Awesome-LLM 快速上手指南\n\n**Awesome-LLM** 并非一个可直接安装运行的单一软件工具，而是一个**精选的大语言模型（LLM）资源清单**。它汇集了里程碑论文、开源模型权重、训练框架、推理工具、数据集、评估基准以及教程课程。\n\n本指南将指导开发者如何利用该清单快速定位所需资源，并搭建基础的 LLM 开发环境。\n\n## 1. 环境准备\n\n由于 Awesome-LLM 指向的资源涵盖从论文研究到模型部署的各个环节，建议根据具体需求准备以下基础环境：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04\u002F22.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **硬件要求**:\n    *   **推理**: 至少 8GB VRAM 的 NVIDIA GPU (运行 7B 参数模型量化版)。\n    *   **微调\u002F训练**: 建议 24GB+ VRAM (如 RTX 3090\u002F4090) 或多卡 A100\u002FH100 环境。\n*   **前置依赖**:\n    *   Python 3.8+\n    *   Git\n    *   CUDA Toolkit (版本需与 PyTorch 匹配，通常建议 11.8 或 12.1+)\n    *   Conda 或 Mamba (推荐用于环境管理)\n\n## 2. 获取资源与安装框架\n\nAwesome-LLM 本身是一个 GitHub 仓库，你可以通过克隆该仓库获取完整的资源索引。随后，根据清单中的推荐，安装主流的 LLM 框架（如 Hugging Face Transformers, vLLM, DeepSpeed 等）。\n\n### 步骤一：克隆 Awesome-LLM 仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHKUNLP\u002FAwesome-LLM.git\ncd Awesome-LLM\n```\n*注：国内访问若受限，可使用镜像源或代理加速。*\n\n### 步骤二：配置国内加速源 (推荐)\n在安装 Python 依赖时，强烈建议使用清华或阿里镜像源以提升下载速度。\n\n```bash\n# 临时使用清华源安装示例包\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple torch transformers accelerate\n```\n\n### 步骤三：安装主流推理\u002F训练框架\n根据清单中 **LLM Inference** 和 **LLM Training Framework** 章节的推荐，选择一个框架进行安装。以下是目前最通用的 **Hugging Face ecosystem** 安装命令：\n\n```bash\n# 创建虚拟环境\nconda create -n llm-env python=3.10 -y\nconda activate llm-env\n\n# 安装 PyTorch (根据CUDA版本选择，此处以CUDA 11.8为例)\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n\n# 安装核心库\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple transformers accelerate sentencepiece protobuf\n```\n\n> **提示**：若需高性能推理，可参考清单安装 `vllm`；若需大规模训练，可参考清单安装 `deepspeed` 或 `megatron-lm`。\n\n## 3. 基本使用示例\n\n利用 Awesome-LLM 清单中 **Open LLM** 章节提供的模型（例如 Qwen2.5 或 Llama 3），结合已安装的 `transformers` 库进行快速推理。\n\n以下是一个加载开源模型并进行对话的最小化示例：\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\n\n# 1. 指定模型名称 (可从 Awesome-LLM 的 Open LLM 列表中选择，如 Qwen\u002FQwen2.5-7B-Instruct)\nmodel_name = \"Qwen\u002FQwen2.5-7B-Instruct\"\n\n# 2. 加载分词器和模型 (自动从 HuggingFace 下载，国内建议配置 HF_ENDPOINT 镜像)\n# export HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com (在终端执行此命令设置镜像)\ntokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_name,\n    torch_dtype=torch.float16,\n    device_map=\"auto\",\n    trust_remote_code=True\n)\n\n# 3. 构建输入\nprompt = \"请简要介绍什么是大语言模型？\"\nmessages = [\n    {\"role\": \"system\", \"content\": \"你是一个有用的助手。\"},\n    {\"role\": \"user\", \"content\": prompt}\n]\ntext = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\ninputs = tokenizer([text], return_tensors=\"pt\").to(model.device)\n\n# 4. 生成回答\noutputs = model.generate(**inputs, max_new_tokens=512)\nresponse = tokenizer.decode(outputs[0], skip_special_tokens=True)\n\nprint(response)\n```\n\n### 下一步行动\n浏览 `Awesome-LLM\u002FREADME.md` 文件中的目录结构，根据你的具体目标深入探索：\n*   想复现最新算法？查看 **Milestone Papers** 和 **Trending LLM Projects**。\n*   想找特定领域数据？查看 **LLM Data**。\n*   想系统学习？查看 **LLM Tutorials and Courses**。","某初创公司算法团队正计划基于开源模型构建垂直领域的智能客服系统，急需筛选合适的基座模型并复现前沿推理能力。\n\n### 没有 Awesome-LLM 时\n- **信息检索低效**：团队成员需在 arXiv、GitHub 和各类技术博客间反复跳转，耗费数天才能拼凑出完整的模型列表，极易遗漏如 DeepSeek-R1 等最新开源项目。\n- **资源验证困难**：难以快速确认哪些模型提供了公开权重或 API，常陷入下载链接失效或文档缺失的困境，导致开发环境搭建多次受阻。\n- **技术选型盲目**：缺乏系统的评测榜单（Leaderboard）和里程碑论文索引，无法客观对比不同架构（如 MoE 与传统 Transformer）的性能差异，选型全靠试错。\n- **学习曲线陡峭**：新手成员找不到结构化的训练框架教程和课程资源，面对复杂的推理部署工具束手无策，拖慢整体研发进度。\n\n### 使用 Awesome-LLM 后\n- **一站式获取资源**：直接通过分类目录锁定\"Open LLM\"和\"Trending Projects\"板块，分钟级内即可找到 TinyZero、Qwen2.5-Max 等高质量候选模型及其代码仓库。\n- **精准定位可用资产**：利用清单中明确标注的 Checkpoints 和 API 信息，迅速跳过不可用项目，直接将 DeepSeek-V3 等成熟模型集成到测试流程中。\n- **科学决策架构**：参考\"LLM Leaderboard\"和\"Milestone Papers\"中的权威数据与经典文献，快速确立以高性价比推理模型为核心的技术路线。\n- **加速团队成长**：成员依托\"LLM Tutorials and Courses\"板块的系统教程，迅速掌握训练框架与部署工具，大幅缩短从理论到落地的周期。\n\nAwesome-LLM 将原本分散杂乱的全球大模型生态整合为有序的知识地图，让研发团队从“大海捞针”转变为“按图索骥”，显著提升了技术落地效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHannibal046_Awesome-LLM_b8bf9d88.png","Hannibal046",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FHannibal046_89229c4a.jpg","seek truth, preserve freedom","https:\u002F\u002Fgithub.com\u002FHannibal046",26604,2424,"2026-04-06T17:12:58","CC0-1.0",1,"","未说明",{"notes":85,"python":83,"dependencies":86},"Awesome-LLM 本身不是一个可运行的软件工具或框架，而是一个 curated list（精选列表），主要包含关于大语言模型（LLM）的论文、开源项目、数据集、评估基准、训练框架、推理工具、教程和书籍等资源的链接集合。因此，它没有具体的操作系统、GPU、内存、Python 版本或依赖库的安装需求。用户需要根据列表中具体引用的子项目（如 TinyZero, DeepSeek-R1 等）的各自文档来配置相应的运行环境。",[],[35,14,88],"其他","2026-03-27T02:49:30.150509","2026-04-07T09:51:17.906753",[92,97,102,107,112,117],{"id":93,"question_zh":94,"answer_zh":95,"source_url":96},21566,"有哪些可以本地部署并用于分析代码库架构的 LLM 工具？","您可以使用以下工具在本地系统上运行 LLM 以分析代码库：\n1. PrivateGPT (https:\u002F\u002Fgithub.com\u002Fimartinez\u002FprivateGPT)\n2. GPT-Index \u002F LlamaIndex (https:\u002F\u002Fgpt-index.readthedocs.io\u002Fen\u002Flatest\u002F)\n3. LangChain (https:\u002F\u002Fpython.langchain.com\u002Fen\u002Flatest\u002F)\n这些项目支持加载本地代码库并进行问答交互。","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F32",{"id":98,"question_zh":99,"answer_zh":100,"source_url":101},21567,"Awesome-LLM 列表中是否包含 LLM 推理引擎？","是的，常见的推理引擎如 llama.cpp, exllama, vLLM, TGI 等已被收录在仓库的 `LLM Deployment`（LLM 部署）章节中。","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F131",{"id":103,"question_zh":104,"answer_zh":105,"source_url":106},21568,"我想贡献一篇关于问答系统大数据架构的论文，但它不完全是针对 LLM 的，适合收录吗？","如果论文主要关注通用大数据架构且与大型语言模型（LLM）的重叠不多，目前可能不太适合收录。维护者建议，如果该架构能明确展示与 ChatGPT 或其他 LLM 的结合应用（例如后续版本计划使用 ChatGPT），或者能进一步阐述其与 LLM 的具体关系，则会更加相关。","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F30",{"id":108,"question_zh":109,"answer_zh":110,"source_url":111},21569,"在哪里可以找到高质量的 LLM 数据集资源？","除了本仓库外，您还可以参考以下专门收集 LLM 数据集的 GitHub 仓库：\n1. LLMDataHub: https:\u002F\u002Fgithub.com\u002FZjh-819\u002FLLMDataHub\n2. Awesome Instruction Dataset: https:\u002F\u002Fgithub.com\u002FyaodongC\u002Fawesome-instruction-dataset","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F92",{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},21570,"有关于 LLM 对齐（Alignment）和指令微调（Instruction-Tuning）的综合综述推荐吗？","推荐查阅综述论文《Aligning Large Language Models with Human: A Survey》。该论文提供了全面的回顾，并配有相关的 GitHub 代码仓库，非常适合了解指令微调和 LLM 对齐领域的最新进展。","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F65",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},21571,"有人提到用对数数学（Logarithmic Math）加速神经网络计算，有相关依据或结果吗？","关于将对数运算应用于神经网络矩阵乘法以加速计算的想法，社区中有相关讨论和研究。您可以参考这篇论文获取更多信息：https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.09876.pdf。不过目前仍需关注具体的实证结果。","https:\u002F\u002Fgithub.com\u002FHannibal046\u002FAwesome-LLM\u002Fissues\u002F66",[]]