[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SafeAILab--EAGLE":3,"tool-SafeAILab--EAGLE":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":10,"env_os":92,"env_gpu":93,"env_ram":92,"env_deps":94,"category_tags":103,"github_topics":104,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":108,"updated_at":109,"faqs":110,"releases":140},2670,"SafeAILab\u002FEAGLE","EAGLE","Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25).","EAGLE 是一套专为大型语言模型（LLM）设计的高效解码加速方案，旨在显著提升文本生成速度，同时确保输出质量与原始模型完全一致。它主要解决了传统自回归解码速度慢、其他加速方法难以兼顾效率与精度的痛点。\n\n作为目前经第三方评测认证最快的推测性解码方法，EAGLE 通过外推模型中间层的上下文特征向量来预测后续令牌，从而大幅减少计算开销。其技术演进亮点显著：EAGLE-2 引入动态树结构调整机制，进一步优化性能；最新的 EAGLE-3 则突破性地融合了多层级语义特征，在无需特征预测约束的情况下实现了无损加速。实测数据显示，在 13B 参数模型上，EAGLE-3 的生成速度可达传统方法的 5.6 倍，且仅需消费级显卡（如 8 张 RTX 3090）即可完成训练与部署，对资源有限的团队十分友好。\n\n此外，EAGLE 具备良好的兼容性，可无缝集成至 vLLM、DeepSpeed、FlashAttention 等主流推理框架及硬件优化方案中。这套工具非常适合 AI 研究人员、后端开发工程师以及希望降低推理成本的企业用户，帮助他们在不牺牲模型效果的前提下，实现更流畅、更低延迟的大模型应用体验。","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_320a59ab21f8.png\" alt=\"EAGLE\" width=\"220\" align=\"left\">\u003Cdiv align=\"center\">\u003Ch1>&nbsp;EAGLE\u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n| \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.15077.pdf\">\u003Cb>Paper (EAGLE)\u003C\u002Fb>\u003C\u002Fa> | \n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.16858\">\u003Cb>Paper (EAGLE-2)\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01840\">\u003Cb>Paper (EAGLE-3)\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Fsites.google.com\u002Fview\u002F\neagle-llm\">\u003Cb>Blog\u003C\u002Fb>\u003C\u002Fa> |\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVersion-v3.0.0-orange.svg\" alt=\"Version\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg\" alt=\"License\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg\" alt=\"Maintenance\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fpulls\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributions-welcome-brightgreen.svg?style=flat\" alt=\"Contributions welcome\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\n##\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_41a0ecd66eca.jpg\" alt=\"benchmark\" width=\"790\">\n\u003C\u002Fp>\n\nEAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) is a new baseline for fast decoding of Large Language Models (LLMs) with provable performance maintenance. This approach involves extrapolating the second-top-layer contextual feature vectors of LLMs, enabling a significant boost in generation efficiency. \n\n- EAGLE is:\n\t- certified by the \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhemingkx\u002FSpec-Bench\u002Fblob\u002Fmain\u002FLeaderboard.md\">\u003Cb>third-party\u003C\u002Fb>\u003C\u002Fa> evaluation as the **fastest** speculative method so far. \n\t- achieving **2x** speedup on \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast\">\u003Cb>gpt-fast\u003C\u002Fb>\u003C\u002Fa>.\n\t- **3x** faster than vanilla decoding (13B).\n \t- **2x** faster than \u003Ca href=\"https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-11-21-lookahead-decoding\u002F\">\u003Cb>Lookahead\u003C\u002Fb>\u003C\u002Fa> (13B).\n \t- **1.6x** faster than \u003Ca href=\"https:\u002F\u002Fsites.google.com\u002Fview\u002Fmedusa-llm\">\u003Cb>Medusa\u003C\u002Fb>\u003C\u002Fa> (13B).\n  \t- provably maintaining the consistency with vanilla decoding in the distribution of generated texts.\n  \t- trainable (within 1-2 days) and testable on 8x RTX 3090 GPUs. So even the GPU poor can afford it.\n\t- combinable with other parallelled techniques such as vLLM, DeepSpeed, Mamba, FlashAttention, quantization, and hardware optimization.\n\nEAGLE-2 uses the confidence scores from the draft model to approximate acceptance rates, dynamically adjusting the draft tree structure, which further enhances performance.\n\n- EAGLE-2 is:\n  - **4x** faster than vanilla decoding (13B).\n  - **1.4x** faster than EAGLE-1 (13B).\n\nEAGLE-3 removes the feature prediction constraint in EAGLE and simulates this process during training using training-time testing. Considering that top-layer features are limited to next-token prediction, EAGLE-3 replaces them with a fusion of low-, mid-, and high-level semantic features. \nEAGLE-3 further improves generation speed while ensuring lossless performance.\n\n- EAGLE-3 is:\n  - **5.6** faster than vanilla decoding (13B).\n  - **1.8x** faster than EAGLE-1 (13B).\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_e08cb74b4fe9.gif\" alt=\"demogif\" width=\"600\">\n\u003C\u002Fp>\n\n_Inference is conducted on 2x RTX 3090 GPUs at fp16 precision using the Vicuna 13B model._\n\n\n[\u002F\u002F]: # ()\n[\u002F\u002F]: # ()\n[\u002F\u002F]: # (Using EAGLE-2, the inference speed on 2 RTX 3060 GPUs can be faster than vanilla autoregressive decoding on an A100 GPU.)\n\n## Support\nEAGLE has been merged in the following mainstream LLM serving frameworks (listed in alphabetical order).\n\n- \u003Ca href=\"https:\u002F\u002Frocm.blogs.amd.com\u002Fsoftware-tools-optimization\u002Fmtp\u002FREADME.html\">AMD ROCm\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fangelslim.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ffeatures\u002Fspeculative_decoding\u002Feagle.html\">AngelSlim\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fawsdocs-neuron.readthedocs-hosted.com\u002Fen\u002Flatest\u002Flibraries\u002Fnxd-inference\u002Fdeveloper_guides\u002Ffeature-guide.html#eagle-speculative-decoding\">AWS NeuronX Distributed Core\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FCPM.cu\">CPM.cu\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-transformers\u002Fpull\u002F1504\">Intel® Extension for Transformers\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fintel-analytics\u002Fipex-llm\u002Fpull\u002F11104\">Intel® LLM Library for PyTorch\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fllm.mlc.ai\u002Fdocs\u002Fdeploy\u002Frest.html\">MLC-LLM\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.nvidia.com\u002Fnemo-framework\u002Fuser-guide\u002Flatest\u002Fmodel-optimization\u002Fspeculative\u002Fspeculative.html\">NVIDIA NeMo Framework\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-LLM\u002Ftree\u002Fmain\u002Fexamples\u002Feagle\">NVIDIA TensorRT-LLM\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fnvidia.github.io\u002FTensorRT-Model-Optimizer\u002Fguides\u002F7_speculative_decoding.html\">NVIDIA TensorRT Model Optimizer\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fpaddlenlp.readthedocs.io\u002Fen\u002Flatest\u002Fllm\u002Fdocs\u002Fpredict\u002Fspeculative_decoding.html\">PaddleNLP\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.sglang.ai\u002Fadvanced_features\u002Fspeculative_decoding.html\">SGLang\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge\">SpecForge\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fspeculators\">speculators\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm\u002Fpull\u002F16937\">vLLM\u003C\u002Fa>\n\n\n## Update\n**2025.9.18**: EAGLE-3 is accepted to NeurIPS'25.\n\n**2025.7.23**: We strongly recommend using [SpecForge](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge) for out-of-the-box training of EAGLE-3 with SGLang.\n\n**2025.3.19**: EAGLE-3 is released.\n\n**2024.8.8**: We now support Qwen-2.\n\n**2024.6.27**: EAGLE-2 is released.\n\n**2024.2.25**: EAGLE is certified by the \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhemingkx\u002FSpec-Bench\u002Fblob\u002Fmain\u002FLeaderboard.md\">third-party\u003C\u002Fa> evaluation as the fastest speculative method.\n\n**2024.1.17**: We now support [Mixtral-8x7B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1).\n\n**2023.12.8**: EAGLE v1.0 is released.\n\n\n\n## Todo\n- [x] Support non-greedy inference (provably maintaining text distribution).\n- [x] Support more LLMs such as Mixtral 8x7B.\n- [x] Support LLaMA-3.\n- [x] Support Qwen-2.\n- [x] Support vLLM (please check \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm\u002Fpull\u002F6830\">vLLM\u003C\u002Fa>'s implementation).\n- [x] EAGLE-3.\n- [x] Training code of EAGLE-3.\n- [x] Support LLaMA-4.\n- [ ] Support official EAGLE-3 for Qwen-3.\n- [ ] EAGLE-4.\n\n## The default main branch is the implementation of EAGLE-3 and EAGLE-2. For using EAGLE-1, please switch to the v1 branch.\n\n## Contents\n\n- [Setup & Installation](#setup--installation)\n- [EAGLE-3 Weights](#eagle-3-weights)\n- [EAGLE Weights](#eagle-weights)\n- [Inference](#inference)\n  - [With UI](#with-ui)\n  - [With Code](#with-code)\n- [Train](#train)\n  - [Generate Train Data](#generate-train-data)\n  - [Train the Auto-regression Head](#train-the-auto-regression-head)\n  - [Inference on custom models](#inference-on-custom-models)\n- [Evaluation](#evaluation)\n\n\n## Setup & Installation\n\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE.git\ncd EAGLE\npython -m venv ~\u002Fvenvs\u002Fea_env\nsource ~\u002Fvenvs\u002Fea_env\u002Fbin\u002Factivate\npip install -r requirements.txt\n```\n## EAGLE-3 Weights\n\n*Note:* This repository recognizes only official EAGLE-3 checkpoints. Performance of unofficial checkpoints may vary. If you want to compare with EAGLE-3, please compare with official checkpoints and official draft tree setups.\n\n## EAGLE-3 Models on Hugging Face\n\n| Base Model | EAGLE-3 Model(s) | Official |\n|-----------|-----------------|----------|\n| **Vicuna-13B v1.3**\u003Cbr>[lmsys\u002Fvicuna-13b-v1.3](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-13b-v1.3) | [yuhuili\u002FEAGLE3-Vicuna1.3-13B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-Vicuna1.3-13B) | Yes |\n| **LLaMA-3.1-8B-Instruct**\u003Cbr>[meta-llama\u002FLlama-3.1-8B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.1-8B-Instruct) | [yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B) | Yes |\n| **LLaMA-3.3-70B-Instruct**\u003Cbr>[meta-llama\u002FLlama-3.3-70B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.3-70B-Instruct) | [yuhuili\u002FEAGLE3-LLaMA3.3-Instruct-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-LLaMA3.3-Instruct-70B) | Yes |\n| **DeepSeek-R1-Distill-LLaMA-8B**\u003Cbr>[deepseek-ai\u002FDeepSeek-R1-Distill-Llama-8B](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-R1-Distill-Llama-8B) | [yuhuili\u002FEAGLE3-DeepSeek-R1-Distill-LLaMA-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-DeepSeek-R1-Distill-LLaMA-8B) | Yes |\n| **LLaMA-4-Scout-17B-16E-Instruct**\u003Cbr>[meta-llama\u002FLlama-4-Scout-17B-16E-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-4-Scout-17B-16E-Instruct) | [lmsys\u002Fsglang-EAGLE3-Llama-4-Scout-17B-16E-Instruct-v1](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fsglang-EAGLE3-Llama-4-Scout-17B-16E-Instruct-v1) | No |\n| **LLaMA-4-Maverick-17B-128E-Instruct**\u003Cbr>[meta-llama\u002FLlama-4-Maverick-17B-128E-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-4-Maverick-17B-128E-Instruct) | [lmsys\u002Fsglang-EAGLE3-Llama-4-Maverick-17B-128E-Instruct-v1](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fsglang-EAGLE3-Llama-4-Maverick-17B-128E-Instruct-v1)\u003Cbr>[nvidia\u002FLlama-4-Maverick-17B-128E-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FLlama-4-Maverick-17B-128E-Eagle3) | No |\n| **Qwen3-1.7B**\u003Cbr>[Qwen\u002FQwen3-1.7B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-1.7B) | [AngelSlim\u002FQwen3-1.7B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-1.7B_eagle3) | No |\n| **Qwen3-4B**\u003Cbr>[Qwen\u002FQwen3-4B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-4B) | [AngelSlim\u002FQwen3-4B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-4B_eagle3) | No |\n| **Qwen3-8B**\u003Cbr>[Qwen\u002FQwen3-8B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-8B) | [Tengyunw\u002Fqwen3_8b_eagle3](https:\u002F\u002Fhuggingface.co\u002FTengyunw\u002Fqwen3_8b_eagle3)\u003Cbr>[AngelSlim\u002FQwen3-8B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-8B_eagle3)\u003Cbr>[Zjcxy-SmartAI\u002FEagle3-Qwen3-8B-zh](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle3-Qwen3-8B-zh) | No |\n| **Qwen3-14B**\u003Cbr>[Qwen\u002FQwen3-14B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-14B) | [AngelSlim\u002FQwen3-14B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-14B_eagle3) | No |\n| **Qwen3-30B-A3B**\u003Cbr>[Qwen\u002FQwen3-30B-A3B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-30B-A3B) | [Tengyunw\u002Fqwen3_30b_moe_eagle3](https:\u002F\u002Fhuggingface.co\u002FTengyunw\u002Fqwen3_30b_moe_eagle3)\u003Cbr>[AngelSlim\u002FQwen3-a3B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-a3B_eagle3) | No |\n| **Qwen3-32B**\u003Cbr>[Qwen\u002FQwen3-32B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-32B) | [AngelSlim\u002FQwen3-32B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-32B_eagle3)\u003Cbr>[Zjcxy-SmartAI\u002FEagle3-Qwen3-32B-zh](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle3-Qwen3-32B-zh) | No |\n| **Qwen3-235B-A22B**\u003Cbr>[Qwen\u002FQwen3-235B-A22B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-235B-A22B) | [nvidia\u002FQwen3-235B-A22B-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FQwen3-235B-A22B-Eagle3)\u003Cbr>[lmsys\u002FQwen3-235B-A22B-EAGLE3](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002FQwen3-235B-A22B-EAGLE3) | No |\n| **MiniCPM4-8B**\u003Cbr>[openbmb\u002FMiniCPM4-8B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FMiniCPM4-8B) | [linglingdan\u002FEagle3_for_MiniCPM4](https:\u002F\u002Fmodelscope.cn\u002Fmodels\u002Flinglingdan\u002FEagle3_for_MiniCPM4) | No |\n| **OLMoE-1B-7B-Instruct**\u003Cbr>[allenai\u002FOLMoE-1B-7B-0125-Instruct](https:\u002F\u002Fhuggingface.co\u002Fallenai\u002FOLMoE-1B-7B-0125-Instruct) | [wantsleep\u002FOLMoE_1B_7B_Eagle3](https:\u002F\u002Fhuggingface.co\u002Fwantsleep\u002FOLMoE_1B_7B_Eagle3) | No |\n| **granite-3.1-1b-a400m-instruct**\u003Cbr>[ibm-granite\u002Fgranite-3.1-1b-a400m-instruct](https:\u002F\u002Fhuggingface.co\u002Fibm-granite\u002Fgranite-3.1-1b-a400m-instruct) | [wantsleep\u002Fgranite-3.1-1b-a400m-EAGLE3](https:\u002F\u002Fhuggingface.co\u002Fwantsleep\u002Fgranite-3.1-1b-a400m-EAGLE3) | No |\n| **GPT-OSS-120B**\u003Cbr>[openai\u002Fgpt-oss-120b](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fgpt-oss-120b) | [lmsys\u002FEAGLE3-gpt-oss-120b-bf16](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002FEAGLE3-gpt-oss-120b-bf16)\u003Cbr>[nvidia\u002Fgpt-oss-120b-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002Fgpt-oss-120b-Eagle3) | No |\n| **GLM-4.7-Flash**\u003Cbr>[zai-org\u002FGLM-4.7-Flash](https:\u002F\u002Fhuggingface.co\u002Fzai-org\u002FGLM-4.7-Flash) | [thoughtworks\u002FGLM-4.7-Flash-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fthoughtworks\u002FGLM-4.7-Flash-Eagle3) | No |\n\n \n\n\n\n## EAGLE Weights\n\n*Note:* The current code defaults to using EAGLE-3. If you want to use EAGLE weights, please specify `use_eagle3=False` in `EaModel.from_pretrained`.\n\n*Note:* When Qwen2 is the target model, please use bf16 precision instead of fp16 to avoid numerical overflow. The training dataset for the draft model of Qwen2 is ShareGPT, which has removed non-English data. Therefore, if you want to use it on non-English data such as Chinese, please train with the corresponding data.\n\n\n[\u002F\u002F]: # (Compared to EAGLE, EAGLE-2 does not require additional training and uses the same weights.)\n\n## EAGLE Models on Hugging Face\n\n| Base Model | EAGLE Model | # EAGLE Parameters | Official |\n|-----------|------------|-------------------|----------|\n| **Vicuna-7B v1.3** | [yuhuili\u002FEAGLE-Vicuna-7B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-7B-v1.3) | 0.24B | Yes |\n| **Vicuna-13B v1.3** | [yuhuili\u002FEAGLE-Vicuna-13B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-13B-v1.3) | 0.37B | Yes |\n| **Vicuna-33B v1.3** | [yuhuili\u002FEAGLE-Vicuna-33B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-33B-v1.3) | 0.56B | Yes |\n| **LLaMA2-Chat 7B** | [yuhuili\u002FEAGLE-llama2-chat-7B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-7B) | 0.24B | Yes |\n| **LLaMA2-Chat 13B** | [yuhuili\u002FEAGLE-llama2-chat-13B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-13B) | 0.37B | Yes |\n| **LLaMA2-Chat 70B** | [yuhuili\u002FEAGLE-llama2-chat-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-70B) | 0.99B | Yes |\n| **Mixtral-8x7B-Instruct v0.1** | [yuhuili\u002FEAGLE-mixtral-instruct-8x7B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-mixtral-instruct-8x7B) | 0.28B | Yes |\n| **LLaMA3-Instruct 8B** | [yuhuili\u002FEAGLE-LLaMA3-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3-Instruct-8B) | 0.25B | Yes |\n| **LLaMA3-Instruct 70B** | [yuhuili\u002FEAGLE-LLaMA3-Instruct-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3-Instruct-70B) | 0.99B | Yes |\n| **Qwen2-7B-Instruct** | [yuhuili\u002FEAGLE-Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Qwen2-7B-Instruct) | 0.26B | Yes |\n| **Qwen2-72B-Instruct** | [yuhuili\u002FEAGLE-Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Qwen2-72B-Instruct) | 1.05B | Yes |\n| **LLaMA3.1-Instruct 8B** | [yuhuili\u002FEAGLE-LLaMA3.1-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3.1-Instruct-8B) | 0.25B | Yes |\n| **Qwen2.5-14B-Instruct** | [Zjcxy-SmartAI\u002FEagle-Qwen2.5-14B-Instruct](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle-Qwen2.5-14B-Instruct) | 0.33B | No |\n\n\n## Inference\nThe inference code we provide automatically allocates model weights (loading a model across multiple GPUs), allowing you to run models that exceed the memory of a single GPU.\n\n### With UI\nWe have provided a suggested web interface, which you can use by running the following command. After the model is fully loaded, a URL will be output in the terminal, which you can enter into your browser to access.\n```bash\npython -m eagle.application.webui --ea-model-path [path of EAGLE weight]\\ \n\t\t--base-model-path [path of the original model]\\\n\t\t--model-type [vicuna\\llama2\\llama3]\\\n        --total-token [int]\n```\nThe *total-token* is the number of draft tokens. For smaller models and advanced GPUs, this value can be set larger. Adjusting according to the specific device and model can achieve better results. If set to -1, EAGLE-2 will automatically configure this parameter.\n\n### With Code\nYou can use our provided \"eagenerate\" for speedup generation just like using 'generate' from Hugging Face. Here is an example.\n```python\nfrom eagle.model.ea_model import EaModel\nfrom fastchat.model import get_conversation_template\nmodel = EaModel.from_pretrained(\n    base_model_path=base_model_path,\n    ea_model_path=EAGLE_model_path,\n    torch_dtype=torch.float16,\n    low_cpu_mem_usage=True,\n    device_map=\"auto\",\n    total_token=-1\n)\nmodel.eval()\nyour_message=\"Hello\"\nconv = get_conversation_template(\"vicuna\")\nconv.append_message(conv.roles[0], your_message)\nconv.append_message(conv.roles[1], None)\nprompt = conv.get_prompt()\ninput_ids=model.tokenizer([prompt]).input_ids\ninput_ids = torch.as_tensor(input_ids).cuda()\noutput_ids=model.eagenerate(input_ids,temperature=0.5,max_new_tokens=512)\noutput=model.tokenizer.decode(output_ids[0])\n```\n\n**_Note: Vicuna, LLaMA2-Chat, and LLaMA3-Instruct are both chat models. You need to use the correct chat template, otherwise it will cause abnormal output from the model and affect the performance of EAGLE._**\n\n\n\n## Train\n\n```bash\ncd eagle\u002Ftraineagle3\ndeepspeed main.py --deepspeed_config ds_config.json\n```\nWe strongly recommend using [SpecForge](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge) for out-of-the-box training of EAGLE-3 with SGLang.\n\n\n\n\n### Inference on custom models\n\nIf the original LLM structure differs from LLaMA and Mixtral, you can utilize EAGLE as follows:\n\nCopy the modeling_basemodelname.py from the Transformers library and proceed to make modifications to leverage the pre-allocated kv_cache for enhanced speed in the base model. You can refer to model\u002Fmodeling_llama_kv.py for guidance, where places that require modifications are annotated with # [MODIFIED]. These modifications are minimal.\n\n\n## Evaluation\nYou can test the speed of EAGLE on MT-bench using the following command. The models will be downloaded automatically and you may need to input your Hugging Face [Access Tokens](https:\u002F\u002Fhuggingface.co\u002Fsettings\u002Ftokens) by ```huggingface-cli login```.\n```bash\npython -m eagle.evaluation.gen_ea_answer_llama3chat --ea-model-path yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B --base-model-path meta-llama\u002FLlama-3.1-8B-Instruct --use_eagle3\n```\n\n```huggingface-cli login```.\n```bash\npython -m eagle.evaluation.gen_ea_answer_qwen3 --ea-model-path \u002Fworkspace\u002Fyunhai\u002FQwen3-4B_eagle3 --base-model-path Qwen\u002FQwen3-4B --use_eagle3\n```\nIf you need specific acceleration ratios, you will also need to run the following command to get the speed of vanilla auto-regression.\n```bash\npython -m eagle.evaluation.gen_baseline_answer_llama3chat --ea-model-path yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B --base-model-path meta-llama\u002FLlama-3.1-8B-Instruct\n```\nThe above two commands will each generate a .jsonl file that records the generation results and wall time. Then, you can use evaluation\u002Fspeed.py to calculate the ratio of speeds.\n\n## 🌟 Our Contributors\n\nA heartfelt thank you to all our contributors.\n\n![Contributors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_b58674d4dad7.png)\n\n\n## Reference\nFor technical details and full experimental results, please check [the paper of EAGLE](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.15077.pdf), [the paper of EAGLE-2](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.16858), and [the paper of EAGLE-3](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01840).\n```\n@inproceedings{li2024eagle, \n\tauthor = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang}, \n\ttitle = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty}, \n\tbooktitle = {International Conference on Machine Learning},\n\tyear = {2024}\n}\n@inproceedings{li2024eagle2, \n\tauthor = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang}, \n\ttitle = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees}, \n\tbooktitle = {Empirical Methods in Natural Language Processing},\n\tyear = {2024}\n}\n@inproceedings{li2025eagle3,\n    author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},\n    title = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test}, \n    booktitle = {Annual Conference on Neural Information Processing Systems},\n    year = {2025}\n}\n```\n\n## Acknowledgements\n\nThis project has been influenced by many excellent projects in the LLM community, such as [Medusa](https:\u002F\u002Fgithub.com\u002FFasterDecoding\u002FMedusa), [FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat), and others. The logo is designed by GPT-4. We also appreciate many valuable discussions with the SGLang team (James Liu, Ke Bao, Yineng Zhang, Lianmin Zheng, Ying Sheng and many others), Tianle Cai, Hao Zhang, Ziteng Sun, and others.\n","\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_320a59ab21f8.png\" alt=\"EAGLE\" width=\"220\" align=\"left\">\u003Cdiv align=\"center\">\u003Ch1>&nbsp;EAGLE\u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n| \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.15077.pdf\">\u003Cb>论文（EAGLE）\u003C\u002Fb>\u003C\u002Fa> | \n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.16858\">\u003Cb>论文（EAGLE-2）\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01840\">\u003Cb>论文（EAGLE-3）\u003C\u002Fb>\u003C\u002Fa> |\n\u003Ca href=\"https:\u002F\u002Fsites.google.com\u002Fview\u002F\neagle-llm\">\u003Cb>博客\u003C\u002Fb>\u003C\u002Fa> |\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVersion-v3.0.0-orange.svg\" alt=\"版本\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg\" alt=\"许可证\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-yes-green.svg\" alt=\"维护状态\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fpulls\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FContributions-welcome-brightgreen.svg?style=flat\" alt=\"欢迎贡献\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\n##\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_41a0ecd66eca.jpg\" alt=\"benchmark\" width=\"790\">\n\u003C\u002Fp>\n\nEAGLE（用于提升语言模型效率的外推算法）是一种新的基准方法，能够在保证性能的前提下实现大型语言模型（LLMs）的快速解码。该方法通过外推LLMs的次顶层上下文特征向量，显著提升生成效率。\n\n- EAGLE的特点是：\n\t- 经过\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhemingkx\u002FSpec-Bench\u002Fblob\u002Fmain\u002FLeaderboard.md\">\u003Cb>第三方\u003C\u002Fb>\u003C\u002Fa>评估认证，目前是**最快**的推测性解码方法。\n\t- 在\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpytorch-labs\u002Fgpt-fast\">\u003Cb>gpt-fast\u003C\u002Fb>\u003C\u002Fa>上实现了**2倍**的速度提升。\n\t- 比原生解码快**3倍**（13B）。\n \t- 比\u003Ca href=\"https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-11-21-lookahead-decoding\u002F\">\u003Cb>Lookahead\u003C\u002Fb>\u003C\u002Fa>快**2倍**（13B）。\n \t- 比\u003Ca href=\"https:\u002F\u002Fsites.google.com\u002Fview\u002Fmedusa-llm\">\u003Cb>Medusa\u003C\u002Fb>\u003C\u002Fa>快**1.6倍**（13B）。\n  \t- 能够在生成文本分布上证明与原生解码的一致性。\n  \t- 训练时间仅需1-2天，可在8张RTX 3090显卡上进行测试，因此即使显卡资源有限的用户也能负担得起。\n\t- 可与其他并行化技术结合使用，如vLLM、DeepSpeed、Mamba、FlashAttention、量化以及硬件优化等。\n\nEAGLE-2利用草稿模型的置信度分数来近似接受率，动态调整草稿树结构，从而进一步提升性能。\n\n- EAGLE-2的特点是：\n  - 比原生解码快**4倍**（13B）。\n  - 比EAGLE-1快**1.4倍**（13B）。\n\nEAGLE-3取消了EAGLE中的特征预测约束，并在训练过程中通过训练时测试来模拟这一过程。考虑到顶层特征仅限于下一个标记的预测，EAGLE-3用低、中、高层次语义特征的融合来替代它们。EAGLE-3在确保无损性能的同时，进一步提升了生成速度。\n\n- EAGLE-3的特点是：\n  - 比原生解码快**5.6倍**（13B）。\n  - 比EAGLE-1快**1.8倍**（13B）。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_e08cb74b4fe9.gif\" alt=\"demogif\" width=\"600\">\n\u003C\u002Fp>\n\n_推理在2张RTX 3090显卡上以fp16精度使用Vicuna 13B模型进行。_\n\n\n[\u002F\u002F]: # ()\n[\u002F\u002F]: # ()\n[\u002F\u002F]: # (使用EAGLE-2，在2张RTX 3060显卡上进行的推理速度可以超过A100显卡上的原生自回归解码。)\n\n## 支持\nEAGLE已被合并到以下主流LLM服务框架中（按字母顺序排列）。\n\n- \u003Ca href=\"https:\u002F\u002Frocm.blogs.amd.com\u002Fsoftware-tools-optimization\u002Fmtp\u002FREADME.html\">AMD ROCm\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fangelslim.readthedocs.io\u002Fzh-cn\u002Flatest\u002Ffeatures\u002Fspeculative_decoding\u002Feagle.html\">AngelSlim\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fawsdocs-neuron.readthedocs-hosted.com\u002Fen\u002Flatest\u002Flibraries\u002Fnxd-inference\u002Fdeveloper_guides\u002Ffeature-guide.html#eagle-speculative-decoding\">AWS NeuronX Distributed Core\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FCPM.cu\">CPM.cu\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fintel\u002Fintel-extension-for-transformers\u002Fpull\u002F1504\">英特尔® Transformer扩展\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fintel-analytics\u002Fipex-llm\u002Fpull\u002F11104\">英特尔® PyTorch LLM库\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fllm.mlc.ai\u002Fdocs\u002Fdeploy\u002Frest.html\">MLC-LLM\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.nvidia.com\u002Fnemo-framework\u002Fuser-guide\u002Flatest\u002Fmodel-optimization\u002Fspeculative\u002Fspeculative.html\">NVIDIA NeMo框架\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-LLM\u002Ftree\u002Fmain\u002Fexamples\u002Feagle\">NVIDIA TensorRT-LLM\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fnvidia.github.io\u002FTensorRT-Model-Optimizer\u002Fguides\u002F7_speculative_decoding.html\">NVIDIA TensorRT模型优化器\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fpaddlenlp.readthedocs.io\u002Fen\u002Flatest\u002Fllm\u002Fdocs\u002Fpredict\u002Fspeculative_decoding.html\">PaddleNLP\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fdocs.sglang.ai\u002Fadvanced_features\u002Fspeculative_decoding.html\">SGLang\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge\">SpecForge\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fspeculators\">speculators\u003C\u002Fa>\n- \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm\u002Fpull\u002F16937\">vLLM\u003C\u002Fa>\n\n\n## 更新\n**2025.9.18**: EAGLE-3被NeurIPS'25接收。\n\n**2025.7.23**: 我们强烈建议使用[SpecForge](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge)，配合SGLang即可直接训练EAGLE-3。\n\n**2025.3.19**: EAGLE-3发布。\n\n**2024.8.8**: 现在支持Qwen-2。\n\n**2024.6.27**: EAGLE-2发布。\n\n**2024.2.25**: EAGLE经\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhemingkx\u002FSpec-Bench\u002Fblob\u002Fmain\u002FLeaderboard.md\">第三方\u003C\u002Fa>评估认证为最快的推测性解码方法。\n\n**2024.1.17**: 现在支持[Mixtral-8x7B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)。\n\n**2023.12.8**: EAGLE v1.0发布。\n\n\n\n## 待办事项\n- [x] 支持非贪婪推理（可证明保持文本分布一致）。\n- [x] 支持更多LLMs，例如Mixtral 8x7B。\n- [x] 支持LLaMA-3。\n- [x] 支持Qwen-2。\n- [x] 支持vLLM（请查看\u003CvLLM>的实现）。\n- [x] EAGLE-3。\n- [x] EAGLE-3的训练代码。\n- [x] 支持LLaMA-4。\n- [ ] 支持Qwen-3的官方EAGLE-3。\n- [ ] EAGLE-4。\n\n## 默认主分支是EAGLE-3和EAGLE-2的实现。若要使用EAGLE-1，请切换至v1分支。\n\n## 目录\n\n- [设置与安装](#setup--installation)\n- [EAGLE-3权重](#eagle-3-weights)\n- [EAGLE权重](#eagle-weights)\n- [推理](#inference)\n  - [通过UI](#with-ui)\n  - [通过代码](#with-code)\n- [训练](#train)\n  - [生成训练数据](#generate-train-data)\n  - [训练自回归头](#train-the-auto-regression-head)\n  - [对自定义模型进行推理](#inference-on-custom-models)\n- [评估](#evaluation)\n\n\n## 设置与安装\n\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE.git\ncd EAGLE\npython -m venv ~\u002Fvenvs\u002Fea_env\nsource ~\u002Fvenvs\u002Fea_env\u002Fbin\u002Factivate\npip install -r requirements.txt\n```\n\n## EAGLE-3 权重\n\n*注：* 本仓库仅支持官方的EAGLE-3检查点。非官方检查点的性能可能会有所不同。如果您想与EAGLE-3进行对比，请务必使用官方检查点及官方草稿树设置。\n\n## Hugging Face 上的EAGLE-3 模型\n\n| 基础模型 | EAGLE-3 模型 | 官方 |\n|-----------|-----------------|----------|\n| **Vicuna-13B v1.3**\u003Cbr>[lmsys\u002Fvicuna-13b-v1.3](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-13b-v1.3) | [yuhuili\u002FEAGLE3-Vicuna1.3-13B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-Vicuna1.3-13B) | 是 |\n| **LLaMA-3.1-8B-Instruct**\u003Cbr>[meta-llama\u002FLlama-3.1-8B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.1-8B-Instruct) | [yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B) | 是 |\n| **LLaMA-3.3-70B-Instruct**\u003Cbr>[meta-llama\u002FLlama-3.3-70B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.3-70B-Instruct) | [yuhuili\u002FEAGLE3-LLaMA3.3-Instruct-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-LLaMA3.3-Instruct-70B) | 是 |\n| **DeepSeek-R1-Distill-LLaMA-8B**\u003Cbr>[deepseek-ai\u002FDeepSeek-R1-Distill-Llama-8B](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-R1-Distill-Llama-8B) | [yuhuili\u002FEAGLE3-DeepSeek-R1-Distill-LLaMA-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE3-DeepSeek-R1-Distill-LLaMA-8B) | 是 |\n| **LLaMA-4-Scout-17B-16E-Instruct**\u003Cbr>[meta-llama\u002FLlama-4-Scout-17B-16E-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-4-Scout-17B-16E-Instruct) | [lmsys\u002Fsglang-EAGLE3-Llama-4-Scout-17B-16E-Instruct-v1](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fsglang-EAGLE3-Llama-4-Scout-17B-16E-Instruct-v1) | 否 |\n| **LLaMA-4-Maverick-17B-128E-Instruct**\u003Cbr>[meta-llama\u002FLlama-4-Maverick-17B-128E-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-4-Maverick-17B-128E-Instruct) | [lmsys\u002Fsglang-EAGLE3-Llama-4-Maverick-17B-128E-Instruct-v1](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fsglang-EAGLE3-Llama-4-Maverick-17B-128E-Instruct-v1)\u003Cbr>[nvidia\u002FLlama-4-Maverick-17B-128E-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FLlama-4-Maverick-17B-128E-Eagle3) | 否 |\n| **Qwen3-1.7B**\u003Cbr>[Qwen\u002FQwen3-1.7B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-1.7B) | [AngelSlim\u002FQwen3-1.7B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-1.7B_eagle3) | 否 |\n| **Qwen3-4B**\u003Cbr>[Qwen\u002FQwen3-4B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-4B) | [AngelSlim\u002FQwen3-4B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-4B_eagle3) | 否 |\n| **Qwen3-8B**\u003Cbr>[Qwen\u002FQwen3-8B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-8B) | [Tengyunw\u002Fqwen3_8b_eagle3](https:\u002F\u002Fhuggingface.co\u002FTengyunw\u002Fqwen3_8b_eagle3)\u003Cbr>[AngelSlim\u002FQwen3-8B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-8B_eagle3)\u003Cbr>[Zjcxy-SmartAI\u002FEagle3-Qwen3-8B-zh](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle3-Qwen3-8B-zh) | 否 |\n| **Qwen3-14B**\u003Cbr>[Qwen\u002FQwen3-14B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-14B) | [AngelSlim\u002FQwen3-14B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-14B_eagle3) | 否 |\n| **Qwen3-30B-A3B**\u003Cbr>[Qwen\u002FQwen3-30B-A3B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-30B-A3B) | [Tengyunw\u002Fqwen3_30b_moe_eagle3](https:\u002F\u002Fhuggingface.co\u002FTengyunw\u002Fqwen3_30b_moe_eagle3)\u003Cbr>[AngelSlim\u002FQwen3-a3B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-a3B_eagle3) | 否 |\n| **Qwen3-32B**\u003Cbr>[Qwen\u002FQwen3-32B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-32B) | [AngelSlim\u002FQwen3-32B_eagle3](https:\u002F\u002Fhuggingface.co\u002FAngelSlim\u002FQwen3-32B_eagle3)\u003Cbr>[Zjcxy-SmartAI\u002FEagle3-Qwen3-32B-zh](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle3-Qwen3-32B-zh) | 否 |\n| **Qwen3-235B-A22B**\u003Cbr>[Qwen\u002FQwen3-235B-A22B](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3-235B-A22B) | [nvidia\u002FQwen3-235B-A22B-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002FQwen3-235B-A22B-Eagle3)\u003Cbr>[lmsys\u002FQwen3-235B-A22B-EAGLE3](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002FQwen3-235B-A22B-EAGLE3) | 否 |\n| **MiniCPM4-8B**\u003Cbr>[openbmb\u002FMiniCPM4-8B](https:\u002F\u002Fhuggingface.co\u002Fopenbmb\u002FMiniCPM4-8B) | [linglingdan\u002FEagle3_for_MiniCPM4](https:\u002F\u002Fmodelscope.cn\u002Fmodels\u002Flinglingdan\u002FEagle3_for_MiniCPM4) | 否 |\n| **OLMoE-1B-7B-Instruct**\u003Cbr>[allenai\u002FOLMoE-1B-7B-0125-Instruct](https:\u002F\u002Fhuggingface.co\u002Fallenai\u002FOLMoE-1B-7B-0125-Instruct) | [wantsleep\u002FOLMoE_1B_7B_Eagle3](https:\u002F\u002Fhuggingface.co\u002Fwantsleep\u002FOLMoE_1B_7B_Eagle3) | 否 |\n| **granite-3.1-1b-a400m-instruct**\u003Cbr>[ibm-granite\u002Fgranite-3.1-1b-a400m-instruct](https:\u002F\u002Fhuggingface.co\u002Fibm-granite\u002Fgranite-3.1-1b-a400m-instruct) | [wantsleep\u002Fgranite-3.1-1b-a400m-EAGLE3](https:\u002F\u002Fhuggingface.co\u002Fwantsleep\u002Fgranite-3.1-1b-a400m-EAGLE3) | 否 |\n| **GPT-OSS-120B**\u003Cbr>[openai\u002Fgpt-oss-120b](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fgpt-oss-120b) | [lmsys\u002FEAGLE3-gpt-oss-120b-bf16](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002FEAGLE3-gpt-oss-120b-bf16)\u003Cbr>[nvidia\u002Fgpt-oss-120b-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fnvidia\u002Fgpt-oss-120b-Eagle3) | 否 |\n| **GLM-4.7-Flash**\u003Cbr>[zai-org\u002FGLM-4.7-Flash](https:\u002F\u002Fhuggingface.co\u002Fzai-org\u002FGLM-4.7-Flash) | [thoughtworks\u002FGLM-4.7-Flash-Eagle3](https:\u002F\u002Fhuggingface.co\u002Fthoughtworks\u002FGLM-4.7-Flash-Eagle3) | 否 |\n\n \n\n## EAGLE 权重\n\n*注：* 当前代码默认使用EAGLE-3。如果您希望使用EAGLE权重，请在`EaModel.from_pretrained`中指定`use_eagle3=False`。\n\n*注：* 当目标模型为Qwen2时，请使用bf16精度而非fp16，以避免数值溢出。Qwen2的草稿模型训练数据集为ShareGPT，该数据集已移除非英文内容。因此，若您希望在中文等非英文数据上使用该模型，请使用相应数据进行训练。\n\n\n[\u002F\u002F]: # (相较于EAGLE，EAGLE-2无需额外训练，且使用相同的权重。)\n\n## Hugging Face 上的 EAGLE 模型\n\n| 基础模型 | EAGLE 模型 | EAGLE 参数量 | 官方 |\n|-----------|------------|-------------------|----------|\n| **Vicuna-7B v1.3** | [yuhuili\u002FEAGLE-Vicuna-7B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-7B-v1.3) | 0.24B | 是 |\n| **Vicuna-13B v1.3** | [yuhuili\u002FEAGLE-Vicuna-13B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-13B-v1.3) | 0.37B | 是 |\n| **Vicuna-33B v1.3** | [yuhuili\u002FEAGLE-Vicuna-33B-v1.3](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Vicuna-33B-v1.3) | 0.56B | 是 |\n| **LLaMA2-Chat 7B** | [yuhuili\u002FEAGLE-llama2-chat-7B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-7B) | 0.24B | 是 |\n| **LLaMA2-Chat 13B** | [yuhuili\u002FEAGLE-llama2-chat-13B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-13B) | 0.37B | 是 |\n| **LLaMA2-Chat 70B** | [yuhuili\u002FEAGLE-llama2-chat-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-llama2-chat-70B) | 0.99B | 是 |\n| **Mixtral-8x7B-Instruct v0.1** | [yuhuili\u002FEAGLE-mixtral-instruct-8x7B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-mixtral-instruct-8x7B) | 0.28B | 是 |\n| **LLaMA3-Instruct 8B** | [yuhuili\u002FEAGLE-LLaMA3-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3-Instruct-8B) | 0.25B | 是 |\n| **LLaMA3-Instruct 70B** | [yuhuili\u002FEAGLE-LLaMA3-Instruct-70B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3-Instruct-70B) | 0.99B | 是 |\n| **Qwen2-7B-Instruct** | [yuhuili\u002FEAGLE-Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Qwen2-7B-Instruct) | 0.26B | 是 |\n| **Qwen2-72B-Instruct** | [yuhuili\u002FEAGLE-Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-Qwen2-72B-Instruct) | 1.05B | 是 |\n| **LLaMA3.1-Instruct 8B** | [yuhuili\u002FEAGLE-LLaMA3.1-Instruct-8B](https:\u002F\u002Fhuggingface.co\u002Fyuhuili\u002FEAGLE-LLaMA3.1-Instruct-8B) | 0.25B | 是 |\n| **Qwen2.5-14B-Instruct** | [Zjcxy-SmartAI\u002FEagle-Qwen2.5-14B-Instruct](https:\u002F\u002Fhuggingface.co\u002FZjcxy-SmartAI\u002FEagle-Qwen2.5-14B-Instruct) | 0.33B | 否 |\n\n\n## 推理\n我们提供的推理代码会自动分配模型权重（将模型加载到多个 GPU 上），使您能够运行超出单个 GPU 内存限制的模型。\n\n### 使用 UI\n我们提供了一个建议的 Web 界面，您可以通过运行以下命令来使用。模型完全加载后，终端会输出一个 URL，您可以在浏览器中输入该 URL 进行访问。\n```bash\npython -m eagle.application.webui --ea-model-path [EAGLE 权重路径]\\ \n\t\t--base-model-path [原始模型路径]\\\n\t\t--model-type [vicuna\\llama2\\llama3]\\\n        --total-token [int]\n```\n*total-token* 是草稿令牌的数量。对于较小的模型和较先进的 GPU，此值可以设置得更大。根据具体的设备和模型进行调整可以获得更好的效果。如果设置为 -1，EAGLE-2 将自动配置该参数。\n\n### 使用代码\n您可以使用我们提供的 \"eagenerate\" 来加速生成，就像使用 Hugging Face 的 'generate' 一样。以下是一个示例。\n```python\nfrom eagle.model.ea_model import EaModel\nfrom fastchat.model import get_conversation_template\nmodel = EaModel.from_pretrained(\n    base_model_path=base_model_path,\n    ea_model_path=EAGLE_model_path,\n    torch_dtype=torch.float16,\n    low_cpu_mem_usage=True,\n    device_map=\"auto\",\n    total_token=-1\n)\nmodel.eval()\nyour_message=\"Hello\"\nconv = get_conversation_template(\"vicuna\")\nconv.append_message(conv.roles[0], your_message)\nconv.append_message(conv.roles[1], None)\nprompt = conv.get_prompt()\ninput_ids=model.tokenizer([prompt]).input_ids\ninput_ids = torch.as_tensor(input_ids).cuda()\noutput_ids=model.eagenerate(input_ids,temperature=0.5,max_new_tokens=512)\noutput=model.tokenizer.decode(output_ids[0])\n```\n\n**_注意：Vicuna、LLaMA2-Chat 和 LLaMA3-Instruct 都是聊天模型。您需要使用正确的聊天模板，否则会导致模型输出异常并影响 EAGLE 的性能。_**\n\n\n\n## 训练\n\n```bash\ncd eagle\u002Ftraineagle3\ndeepspeed main.py --deepspeed_config ds_config.json\n```\n我们强烈建议使用 [SpecForge](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002FSpecForge) 来开箱即用地使用 SGLang 训练 EAGLE-3。\n\n\n\n\n### 自定义模型的推理\n\n如果原始的 LLM 结构与 LLaMA 和 Mixtral 不同，您可以按照以下方式使用 EAGLE：\n\n从 Transformers 库中复制 modeling_basemodelname.py 文件，并进行修改，以利用预分配的 kv_cache 来提升基础模型的速度。您可以参考 model\u002Fmodeling_llama_kv.py 文件获取指导，其中需要修改的地方都标有 # [MODIFIED]。这些修改非常少。\n\n\n## 评估\n您可以通过以下命令在 MT-bench 上测试 EAGLE 的速度。模型会自动下载，您可能需要通过 ```huggingface-cli login``` 输入您的 Hugging Face [访问令牌](https:\u002F\u002Fhuggingface.co\u002Fsettings\u002Ftokens)。\n```bash\npython -m eagle.evaluation.gen_ea_answer_llama3chat --ea-model-path yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B --base-model-path meta-llama\u002FLlama-3.1-8B-Instruct --use_eagle3\n```\n\n```huggingface-cli login```.\n```bash\npython -m eagle.evaluation.gen_ea_answer_qwen3 --ea-model-path \u002Fworkspace\u002Fyunhai\u002FQwen3-4B_eagle3 --base-model-path Qwen\u002FQwen3-4B --use_eagle3\n```\n如果您需要具体的加速比，还需要运行以下命令来获取普通自回归的速度。\n```bash\npython -m eagle.evaluation.gen_baseline_answer_llama3chat --ea-model-path yuhuili\u002FEAGLE3-LLaMA3.1-Instruct-8B --base-model-path meta-llama\u002FLlama-3.1-8B-Instruct\n```\n以上两个命令会分别生成一个 .jsonl 文件，记录生成结果和实际耗时。然后，您可以使用 evaluation\u002Fspeed.py 来计算速度比。\n\n## 🌟 我们的贡献者\n\n衷心感谢所有贡献者。\n\n![Contributors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_readme_b58674d4dad7.png)\n\n\n## 参考文献\n有关技术细节和完整实验结果，请参阅 [EAGLE 论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.15077.pdf)、[EAGLE-2 论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2406.16858) 和 [EAGLE-3 论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2503.01840)。\n```\n@inproceedings{li2024eagle, \n\tauthor = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang}, \n\ttitle = {{EAGLE}: Speculative Sampling Requires Rethinking Feature Uncertainty}, \n\tbooktitle = {国际机器学习会议},\n\tyear = {2024}\n}\n@inproceedings{li2024eagle2, \n\tauthor = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang}, \n\ttitle = {{EAGLE-2}: Faster Inference of Language Models with Dynamic Draft Trees}, \n\tbooktitle = {自然语言处理中的经验方法},\n\tyear = {2024}\n}\n@inproceedings{li2025eagle3,\n    author = {Yuhui Li and Fangyun Wei and Chao Zhang and Hongyang Zhang},\n    title = {{EAGLE-3}: Scaling up Inference Acceleration of Large Language Models via Training-Time Test}, \n    booktitle = {神经信息处理系统年度会议},\n    year = {2025}\n}\n```\n\n## 致谢\n\n本项目受到了大语言模型社区中许多优秀项目的启发，例如 [Medusa](https:\u002F\u002Fgithub.com\u002FFasterDecoding\u002FMedusa)、[FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) 等。项目标志由 GPT-4 设计。我们还要感谢与 SGLang 团队（James Liu、Ke Bao、Yineng Zhang、Lianmin Zheng、Ying Sheng 等多位成员）、Tianle Cai、Hao Zhang、Ziteng Sun 等人的诸多宝贵讨论。","# EAGLE 快速上手指南\n\nEAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) 是一种用于大语言模型（LLM）快速解码的推测性采样技术。它通过外推 LLM 的第二顶层上下文特征向量，在保持生成文本分布一致性的前提下，显著提升推理速度。最新版本的 **EAGLE-3** 相比普通解码速度提升可达 **5.6 倍**。\n\n## 环境准备\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04+)\n*   **Python**: 3.8 - 3.11\n*   **GPU**: 支持 CUDA 的 NVIDIA GPU (推荐 RTX 3090 或更高，显存需满足目标模型需求)\n    *   *注：EAGLE-3 训练可在 8x RTX 3090 上完成，推理仅需单卡或双卡。*\n*   **依赖库**: PyTorch, Transformers, Accelerate 等 (将通过 `requirements.txt` 自动安装)\n\n## 安装步骤\n\n1.  **克隆仓库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE.git\n    cd EAGLE\n    ```\n\n2.  **创建虚拟环境并激活**\n    ```bash\n    python -m venv ~\u002Fvenvs\u002Fea_env\n    source ~\u002Fvenvs\u002Fea_env\u002Fbin\u002Factivate\n    ```\n\n3.  **安装依赖**\n    *国内用户建议使用清华或阿里镜像源加速安装：*\n    ```bash\n    pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n当前主分支默认使用 **EAGLE-3** 实现。使用前请确保已下载对应的 EAGLE 权重模型（见下文模型列表）。\n\n### 1. 获取模型权重\nEAGLE 需要配合特定的“草稿模型”权重使用。你可以从 Hugging Face 下载官方支持的模型，例如针对 Vicuna-13B 的 EAGLE-3 模型：\n*   基座模型：`lmsys\u002Fvicuna-13b-v1.3`\n*   EAGLE-3 权重：`yuhuili\u002FEAGLE3-Vicuna1.3-13B`\n\n### 2. 代码推理示例\n以下是最简单的 Python 推理脚本示例：\n\n```python\nimport torch\nfrom eagle.model import EaModel\nfrom transformers import AutoTokenizer\n\n# 配置路径\nbase_model_path = \"lmsys\u002Fvicuna-13b-v1.3\"       # 原始大模型路径\neagle_model_path = \"yuhuili\u002FEAGLE3-Vicuna1.3-13B\" # EAGLE 权重路径\n\n# 加载 Tokenizer\ntokenizer = AutoTokenizer.from_pretrained(base_model_path)\n\n# 加载 EAGLE 模型\n# use_eagle3=True 为默认值，若需使用旧版 EAGLE-1 请设为 False\nmodel = EaModel.from_pretrained(\n    base_model_path=base_model_path,\n    eagle_model_path=eagle_model_path,\n    torch_dtype=torch.float16, # Qwen2 系列建议改为 torch.bfloat16 以防溢出\n    device_map=\"auto\"\n)\n\n# 准备输入\ninput_text = \"Once upon a time,\"\ninputs = tokenizer(input_text, return_tensors=\"pt\").to(model.device)\n\n# 执行推理\n# EAGLE 会自动处理推测性解码过程\noutputs = model.generate(\n    **inputs,\n    max_new_tokens=100,\n    temperature=0.7,\n    top_p=0.9,\n    do_sample=True\n)\n\n# 输出结果\nresult = tokenizer.decode(outputs[0], skip_special_tokens=True)\nprint(result)\n```\n\n### 3. 启动交互式 UI (可选)\n项目自带简单的 Web UI 用于测试：\n\n```bash\npython demo.py --base-model lmsys\u002Fvicuna-13b-v1.3 --eagle-model yuhuili\u002FEAGLE3-Vicuna1.3-13B\n```\n运行后在浏览器访问显示的本地地址即可进行对话测试。\n\n> **提示**：若需使用非官方支持的模型或自定义模型，请参考仓库中的 `Train` 章节进行微调训练。对于生产环境部署，推荐结合 **vLLM** 或 **SGLang** 框架使用，以获得更佳的并发性能。","某初创团队正在开发一款基于 13B 参数大模型的实时智能客服系统，需要在有限的消费级显卡资源下支撑高并发对话请求。\n\n### 没有 EAGLE 时\n- **响应延迟高**：采用传统自回归解码，用户提问后需等待数秒才能看到完整回复，严重影响交互体验。\n- **硬件成本高昂**：为达到可接受的并发量，被迫租用昂贵的 A100 集群，初创资金难以负荷。\n- **部署门槛高**：现有的加速方案往往需要复杂的并行策略或特定的高端硬件，小团队缺乏调优能力。\n- **生成质量妥协**：尝试过其他投机采样方法，但常出现语句不通顺或与原模型分布不一致的“幻觉”问题。\n\n### 使用 EAGLE 后\n- **速度显著提升**：利用 EAGLE-3 技术，在 2 张 RTX 3090 上实现了比传统解码快 5.6 倍的生成速度，对话几乎零延迟。\n- **低成本高性能**：无需升级硬件，仅用消费级显卡即可跑出超越单卡 A100 的推理性能，大幅降低运营成本。\n- **无损一致性**：EAGLE 从数学上保证了生成文本分布与原模型完全一致，确保了客服回答的专业性和准确性。\n- **易于集成落地**：直接兼容 vLLM 等主流框架，团队在一天内即可完成训练与部署，快速上线业务。\n\nEAGLE 让资源受限的团队也能在低成本硬件上享受极致的推理加速，同时严格守住大模型的生成质量底线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSafeAILab_EAGLE_41a0ecd6.jpg","SafeAILab","SafeAI Lab (SAIL)","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSafeAILab_8d9a4ca5.png","We are an AI lab with a focus on making AI safer and accessible to everyone, led by Hongyang Zhang.",null,"h935zhan@uwaterloo.ca","https:\u002F\u002Fhongyanz.github.io\u002F","https:\u002F\u002Fgithub.com\u002FSafeAILab",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,2256,269,"2026-04-02T18:10:08","NOASSERTION","未说明","需要 NVIDIA GPU。训练和测试环境示例为 8x RTX 3090；推理示例为 2x RTX 3090 (fp16) 或 2x RTX 3060。支持 AMD ROCm 和 AWS NeuronX。具体显存需求取决于基座模型大小（如 13B 模型需多卡或大显存），未明确最低显存阈值。",{"notes":95,"python":96,"dependencies":97},"1. 该工具主要用于加速大语言模型推理（投机采样），需配合特定的基座模型（如 Vicuna, LLaMA, Qwen 等）及对应的 EAGLE 权重使用。\n2. 官方推荐使用 SpecForge 进行 EAGLE-3 的开箱即用训练。\n3. 当基座模型为 Qwen2 时，必须使用 bf16 精度而非 fp16 以避免数值溢出。\n4. 默认主分支为 EAGLE-3 和 EAGLE-2 实现，若需使用 EAGLE-1 请切换至 v1 分支。\n5. 支持与 vLLM, DeepSpeed, FlashAttention 等技术结合使用。","未说明 (需支持 venv 及 requirements.txt 中的依赖)",[98,99,100,101,102],"torch","transformers","accelerate","vllm (可选集成)","sglang (可选集成)",[26,13],[105,106,107],"large-language-models","llm-inference","speculative-decoding","2026-03-27T02:49:30.150509","2026-04-06T05:17:22.366041",[111,116,121,126,131,136],{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},12382,"如何训练 EAGLE-3 模型？代码库中似乎缺少相关的训练脚本。","EAGLE-3 的训练代码已经发布。请检查项目仓库的最新更新，训练脚本现已可用。如果您需要针对特定模型（如 Qwen）进行适配，可能需要进一步修改 `cnets.py` 等文件以兼容不同架构。","https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\u002F194",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},12383,"EAGLE-3 论文中提到的“接受率”（Acceptance Rate）具体是如何定义和计算的？","接受率应定义为草稿模型（draft model）在特定 token 位置上的预测准确率（例如，对所有执行中位置 1 的预测取平均）。它不是简单的整体接受比例，而是针对特定生成位置的准确性度量。请注意，论文图表中的横坐标标签曾存在笔误，已在最新的 arXiv 版本中修正。","https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\u002F195",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},12384,"如何在项目中贡献针对多批次（Multi-batch）推理优化的代码或创建新分支？","项目维护者欢迎社区贡献。如果您有针对 EAGLE-1\u002F2\u002F3 的多批次优化算法（如动态调整草稿树参数），可以直接联系维护者（例如通过微信 hongyanzha）讨论创建新分支的事宜。贡献者通常需要提供清理后的代码、性能评估报告以及算法说明，维护者会协助将其合并到项目的特定分支中。","https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\u002F234",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},12385,"如何使 EAGLE 支持 Qwen2 模型的推理和训练？","目前训练代码已发布，但原生代码可能尚未完全适配 Qwen2 架构。用户需要手动修改 `cnets.py` 等相关代码文件，使其兼容 Qwen 模型的结构。建议参考已有的 Llama 系列实现进行迁移，并关注社区中关于 Qwen2 适配的具体讨论和补丁。","https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\u002F225",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},12386,"为什么我复现的 EAGLE-1\u002F2 接受长度（accept_length）结果偏低（约为 2），是代码有 Bug 吗？","这通常不是代码 Bug，而是测试模式的问题。EAGLE-2 的非重复采样模式（non-repeat sampling）下 q(x)=1.0 是一种特殊情况。如果您直接使用 `gen_ea_alpha_vicuna.py` 进行测试而未启用链式推测（chain speculation），得到的结果可能无法反映论文中的最佳性能。建议阅读相关推测解码论文，并确保测试配置（如树推测结构）与论文实验设置一致。","https:\u002F\u002Fgithub.com\u002FSafeAILab\u002FEAGLE\u002Fissues\u002F95",{"id":137,"question_zh":138,"answer_zh":139,"source_url":115},12387,"EAGLE-3 的训练损失函数具体发生了什么变化？是否移除了特征预测？","EAGLE-3 的核心改进在于训练策略，而非简单的损失函数公式变更。其本质是通过移位输入来获取多个草稿模型预测的 logits，然后计算与目标模型 logits 的交叉熵损失。虽然代码中看似仍是标准的 next token 损失，但这实际上是在同一输入上进行了 logit 级别的数据增强，类似于知识蒸馏中的技术，旨在让模型学习比较多个输出 token 而非单一的特征预测。",[]]