[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-horseee--LLM-Pruner":3,"tool-horseee--LLM-Pruner":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":102,"env_deps":104,"category_tags":111,"github_topics":112,"view_count":10,"oss_zip_url":126,"oss_zip_packed_at":126,"status":16,"created_at":127,"updated_at":128,"faqs":129,"releases":158},646,"horseee\u002FLLM-Pruner","LLM-Pruner","[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3\u002F3.1, Llama-2, LLaMA,  BLOOM, Vicuna, Baichuan, TinyLlama, etc.","LLM-Pruner 是一款专注于大语言模型结构剪枝的开源项目，致力于将庞大的语言模型压缩至任意尺寸。针对大模型部署成本高、推理速度慢的痛点，LLM-Pruner 通过结构剪枝技术，在显著减少模型参数的同时，有效保留其原有的多任务处理能力。\n\nLLM-Pruner 的一大亮点在于极高的效率与低资源门槛。用户仅需约 5 万条公开样本进行后训练，便可在数分钟内完成剪枝，数小时内恢复模型性能。LLM-Pruner 广泛支持 Llama 系列（含最新的 Llama 3\u002F3.1）、BLOOM、Vicuna、Baichuan 等主流架构，并持续更新以适配新特性如 GQA。\n\n无论是希望降低推理成本的开发者，还是探索模型压缩策略的研究人员，LLM-Pruner 都能提供自动化且便捷的解决方案，大幅减少人工干预，助力高效构建轻量级大模型应用。","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_fac030ae574d.png\" width=\"20%\"> \u003Cbr>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\u003Ch1>LLM-Pruner\u003C\u002Fh1>\n  \u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\n    \u003Cimg alt=\"License: Apache 2.0\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-4E94CE.svg\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPyTorch-%3E=v1.7.1-EE4C2C.svg?style=flat-square\" alt=\"PyTorch>=v1.7.1\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-LLaMA-FFB000.svg?style=flat-square\" alt=\"LLaMA\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Llama2-FAB093.svg?style=flat-square\" alt=\"Llama-2\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Llama3&3.1-7CC217.svg?style=flat-square\" alt=\"Llama-3\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Vicuna-924E7D.svg?style=flat-square\" alt=\"Vicuna\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fbloom\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-BLOOM-1A63BD.svg?style=flat-square\" alt=\"BLOOM\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTHUDM\u002FChatGLM-6B\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-chatGLM-6082B6.svg?style=flat-square\" alt=\"chatGLM\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbaichuan-inc\u002FBaichuan-7B\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Baichuan-18ac62.svg?style=flat-square\" alt=\"Baichuan\">\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Ch3>On the Structural Pruning of Large Language Models\u003Ch3>\n:llama: :llama: :llama: :llama: :llama: Compress your LLMs to any size! :llama: :llama: :llama: :llama: :llama:\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n\u003Cimg width=\"100%\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_403fcd99a87a.png\">    \n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_76fc3d6e1378.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n\n\n## Introduction\n  \n> **[LLM-Pruner: On the Structural Pruning of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11627)** [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11627)   \n> *Xinyin Ma, Gongfan Fang, Xinchao Wang*   \n> *National University of Singapore*  \n\n#### Why LLM-Pruner\n- [x] **Task-agnostic compression**: The compressed LLM should retain its original ability as a multi-task solver. \n- [x] **Less training corpus**: In this work, we use only 50k publicly available samples (alpaca) to post-train the LLM.  \n- [x] **Efficient compression**: 3 minutes for pruning and 3 hours for post-training. (You can make it longer)\n- [x] **Automatic structural pruning**: Pruning new LLMs with minimal human effort (In progress).\n\n#### Supported LLMs:\n- [x] [Llama-3.1](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmeta-llama\u002Fllama-31-669fc079a0c406a149a5738f)\n- [x] [Llama-3](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmeta-llama\u002Fmeta-llama-3-66214712577ca38149ebb2b6)\n- [x] [Llama-2](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#1-pruning-discovery-stage--estimation-stage)\n- [x] [LLaMA](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#1-pruning-discovery-stage--estimation-stage)\n- [x] [BLOOM](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#cherry_blossom-bloom) \n- [x] [Vicuna](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#llama-vicuna-pruning)\n- [x] [Baichuan](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#llama-baichuan-pruning)\n- [x] [TinyLlama](https:\u002F\u002Fgithub.com\u002Fjzhang38\u002FTinyLlama) \n\n#### Updates:\n* July 27, 2024: :rocket: Support GQA! Now LLM-Pruner can work on Llama3 and Llama 3.1. 
We are still testing the pruning results of new LLMs (Llama3, Llama3.1, Gemma) and you can find the pruning results [here](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fmore_results#more-results).\n* August 30, 2023: LLM-Pruner now supports [BLOOM](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fbloom) :cherry_blossom:\n* August 14, 2023:  [Code](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) and [results](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) for finetuning with a large-scale corpus are now available. The fine-tuned LLaMA-5.4B model achieves an average accuracy of 62.36%, closely approaching the original LLaMA-7B (63.25%).\n* July 19, 2023: :fire:  LLM-Pruner now supports Llama-2-7b and Llama-2-13b (the huggingface version) \n* July 18, 2023: :rocket: Support [Baichuan](https:\u002F\u002Fgithub.com\u002Fbaichuan-inc\u002FBaichuan-7B), a bilingual LLM.\n* May 20, 2023: :tada: Code and Preprint Paper released! \n\n#### TODO List:\n- [ ] A tutorial for pruning new LLMs.\n- [ ] Support `.from_pretrained()` for loading the model.\n\n#### **Contact Us:**\nJoin our WeChat group for a chat:\n  * WeChat Group [Group-2](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F3fe4c487-5a5b-43fd-bf64-a5ee62c3dec1)  (>200\u002F500), [Group-1](https:\u002F\u002Fgithub.com\u002FVainF\u002FTorch-Pruning\u002Fassets\u002F18592211\u002F35d66130-eb03-4dcb-ad75-8df784460ad3) (500\u002F500, FULL).\n\n\n\n## Table of Contents\n  - [Quick Start](#quick-start)\n  - [Step-by-step Instructions](#step-by-step-instructions)\n  - [Zero-shot Evaluation](#zero-shot-evaluation)\n  - [More-Examples](#more-examples)\n  - [Version Information](#version-information)\n  - [Limitations](#limitations)\n  - [Acknowledgement](#acknowledgement)\n  - [Citation](#citation)\n\n## Quick Start\n\n### Installation\n```\npip install -r requirement.txt\n```\n\n### Minimal Example\n```\nbash script\u002Fllama_prune.sh\n```\nThis script would compress the LLaMA-7B model with ～20\\% parameters pruned. All the pre-trained models and the dataset would be automatically downloaded, so you do not need to manually download the resource. When running this script for the first time, it will require some time to download the model and the dataset.\n\n    \n## Step-by-step Instructions  \n    \nIt takes three steps to prune an LLM:\n* \u003Cu>Discovery Stage\u003C\u002Fu>: Discover the complicated inter-dependency in LLMs and find the minimally-removable unit, **group**.\n* \u003Cu>Estimation Stage\u003C\u002Fu>: Estimate the contribution of each group to the overall performance of the model and decide which group to prune. \n* \u003Cu>Recover Stage\u003C\u002Fu>: Fast post-training to recover model performance.\n  \nAfter pruning and post-training, we follow \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\">lm-evaluation-harness\u003C\u002Fa> for evaluation.\n    \n### 1. 
Pruning (Discovery Stage + Estimation Stage)\n    \n:llama: **LLaMA\u002FLlama-2 pruning with ~20% parameters pruned:**\n```\npython hf_prune.py --pruning_ratio 0.25 \\\n      --block_wise \\\n      --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n      --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n      --pruner_type taylor \\\n      --test_after_train \\\n      --device cpu  --eval_device cuda \\\n      --save_ckpt_log_name llama_prune \n```\nArguments:\n- ``Base model``: Choose the base model from LLaMA or Llama-2 and pass the `pretrained_model_name_or_path` to `--base_model`. The model name is used for `AutoModel.from_pretrained` to load the pre-trained LLM. For example, if you want to use the llama-2 with 13 billion parameters, then pass `meta-llama\u002FLlama-2-13b-hf` to `--base_model`.\n- ``Pruning Strategy``: Choose between block-wise, channel-wise, or layer-wise pruning using the respective command options: {--block_wise}, {--channel_wise}, {--layer_wise --layer NUMBER_OF_LAYERS}. For block-wise pruning, specify the start and end layers to be pruned. Channel-wise pruning does not require extra arguments. For layer pruning, use --layer NUMBER_OF_LAYERS to specify the desired number of layers to be kept after pruning.\n- ``Importance Criterion``: Select from l1, l2, random, or taylor using the --pruner_type argument. For the taylor pruner, choose one of the following options: vectorize, param_second, param_first, param_mix. By default, param_mix is used, which combines approximated second-order hessian and first-order gradient. If using l1, l2, or random, no extra arguments are required.\n- ``Pruning Ratio``: Specifies the pruning ratio of groups. It differs from the pruning rate of parameters, as groups are removed as the minimal units.\n- ``Device`` and ``Eval_device``: Pruning and evaluation can be performed on different devices. Taylor-based methods require backward computation during pruning, which may require significant GPU RAM. Our implementation uses the CPU for importance estimation (also supports GPU, simply use --device cuda). eval_device is used to test the pruned model.\n \n\n\n#### :llama: Vicuna Pruning\n\n\u003Cdetails>\n\u003Csummary>Details:\u003C\u002Fsummary>\n  \nIf you want to try Vicuna, please specify the argument `--base_model` to the path to vicuna weight. 
Please follow \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u003C\u002Fa> to get Vicuna weights.\n```\npython hf_prune.py --pruning_ratio 0.25 \\\n      --block_wise \\\n      --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n      --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n      --pruner_type taylor \\\n      --test_after_train \\\n      --device cpu  --eval_device cuda \\\n      --save_ckpt_log_name llama_prune \\\n      --base_model PATH_TO_VICUNA_WEIGHTS\n```\n\n\u003C\u002Fdetails>\n\n\n#### :llama: Baichuan Pruning\n\n\u003Cdetails>\n\u003Csummary>Details:\u003C\u002Fsummary>\n  \nPlease refer to the [Example\u002FBaichuan](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#llama-baichuan-pruning) for more details\n\n\u003C\u002Fdetails>\n\n#### :llama: Llama3\u002FLlama3.1 Pruning\n\n\u003Cdetails>\n\u003Csummary>Details:\u003C\u002Fsummary>\n  \n```\npython llama3.py --pruning_ratio 0.25 \\\n                 --device cuda --eval_device cuda \\\n                 --base_model meta-llama\u002FMeta-Llama-3-8B-Instruct \\\n                 --block_wise --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n                 --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n                 --save_ckpt_log_name llama3_prune \\\n                 --pruner_type taylor --taylor param_first \\\n                 --max_seq_len 2048 \\\n                 --test_after_train --test_before_train --save_model \n```\n\n\u003C\u002Fdetails>\n    \n### 2. Post-Training (Recover Stage)\n\n* Train using Alpaca with 50,000 samples. Here's an example of training on a single GPU:\n```\nCUDA_VISIBLE_DEVICES=X python post_training.py --prune_model prune_log\u002FPATH_TO_PRUNE_MODEL\u002Fpytorch_model.bin \\\n      --data_path yahma\u002Falpaca-cleaned \\\n      --lora_r 8 \\\n      --num_epochs 2 \\\n      --learning_rate 1e-4 \\\n      --batch_size 64 \\\n      --output_dir tune_log\u002FPATH_TO_SAVE_TUNE_MODEL \\\n      --wandb_project llama_tune\n```\nMake sure to replace `PATH_TO_PRUNE_MODEL` with the path to the pruned model in step 1, and replace `PATH_TO_SAVE_TUNE_MODEL` with the desired location where you want to save the tuned model.\n\n**Tip**: [Training LLaMA-2 in float16 is not recommended and is known to produce nan; as such, the model should be trained in bfloat16.](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fllama2#usage-tips)\n\n* Train using [MBZUAI\u002FLaMini-instruction](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FMBZUAI\u002FLaMini-instruction) with 2.59M samples. Here is an example using multiple GPUs for training:\n```\ndeepspeed --include=localhost:1,2,3,4 post_training.py \\\n      --prune_model prune_log\u002FPATH_TO_PRUNE_MODEL\u002Fpytorch_model.bin \\\n      --data_path MBZUAI\u002FLaMini-instruction  \\\n      --lora_r 8 \\\n      --num_epochs 3  \\\n      --output_dir tune_log\u002FPATH_TO_SAVE_TUNE_MODEL \\\n      --extra_val_dataset wikitext2,ptb \\\n      --wandb_project llmpruner_lamini_tune \\\n      --learning_rate 5e-5 \\\n      --cache_dataset\n```\n\n### 3. Generation\n\n#### How to load pruned\u002Fpre-trained models:\n\nFor the pruned model, simply use the following command to load your model. 
\n``` \n  pruned_dict = torch.load(YOUR_CHECKPOINT_PATH, map_location='cpu')\n  tokenizer, model = pruned_dict['tokenizer'], pruned_dict['model']\n```\nBecause layer configurations differ across the pruned model (some layers keep a larger width while others are pruned more heavily), it is impractical to load it with the `.from_pretrained()` provided by Hugging Face. Currently, we use `torch.save` to store the pruned model and `torch.load` to load it.\n  \n#### Generation with Gradio Interface\nWe provide a simple script to generate texts using pre-trained \u002F pruned models \u002F pruned models with post-training. \n    \n* LLaMA-7B Pre-trained\n```\npython generate.py --model_type pretrain\n```\n* Pruned Model without Post-Training\n```\npython generate.py --model_type pruneLLM --ckpt \u003CYOUR_MODEL_PATH_FOR_PRUNE_MODEL>\n```\n* Pruned Model with Post-Training \n```\npython generate.py --model_type tune_prune_LLM --ckpt \u003CYOUR_CKPT_PATH_FOR_PRUNE_MODEL> --lora_ckpt \u003CYOUR_CKPT_PATH_FOR_LORA_WEIGHT>\n```\n\nThe above instructions will deploy your LLMs locally. \n  \n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_1c381262caee.png\" width=\"100%\">\u003C\u002Fimg>\n\u003C\u002Fdiv>\n\n\n### 4. Evaluation\nFor evaluating the performance of the pruned model, we follow [lm-evaluation-harness](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness) to evaluate the model:\n* Step 1: If you only need to evaluate the pruned model, then skip this step and jump to Step 2.\nThis step arranges the files to satisfy the input requirements of `lm-evaluation-harness`. The [tuned checkpoint from the post-training step](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) will be saved in the following format:\n```\n- PATH_TO_SAVE_TUNE_MODEL\n  | - checkpoint-200\n      | - pytorch_model.bin\n      | - optimizer.pt\n      ...\n  | - checkpoint-400\n  | - checkpoint-600\n  ...\n  | - adapter_config.bin\n  | - adapter_config.json\n```\nArrange the files by the following commands:\n```\ncd PATH_TO_SAVE_TUNE_MODEL\nexport epoch=YOUR_EVALUATE_EPOCH\ncp adapter_config.json checkpoint-$epoch\u002F\nmv checkpoint-$epoch\u002Fpytorch_model.bin checkpoint-$epoch\u002Fadapter_model.bin\n```\nIf you want to evaluate the `checkpoint-200`, then set the epoch to 200 via `export epoch=200`.\n\n\n* Step 2:\n```\nexport PYTHONPATH='.'\npython lm-evaluation-harness\u002Fmain.py --model hf-causal-experimental \\\n       --model_args checkpoint=PATH_TO_PRUNE_MODEL,peft=PATH_TO_SAVE_TUNE_MODEL,config_pretrained=PATH_OR_NAME_TO_BASE_MODEL \\\n       --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \\\n       --device cuda:0 --no_cache \\\n       --output_path PATH_TO_SAVE_EVALUATION_LOG \n```\nHere, replace `PATH_TO_PRUNE_MODEL` and `PATH_TO_SAVE_TUNE_MODEL` with the paths where you saved the pruned model and the tuned model, and `PATH_OR_NAME_TO_BASE_MODEL` is for loading the configuration file of the base model. 
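To make the loading convention above concrete, here is a minimal sketch that wraps the documented `torch.load` pattern into a reusable helper and runs one generation. The checkpoint path and prompt are hypothetical; it assumes a checkpoint written by `hf_prune.py` with `--save_model`:

```python
import torch

def load_pruned(ckpt_path: str):
    """Load a checkpoint produced by LLM-Pruner.

    Pruned layers have heterogeneous widths, so Hugging Face's
    `.from_pretrained()` cannot rebuild the model; the checkpoint
    instead pickles the whole model and tokenizer in one dict.
    """
    # On PyTorch >= 2.6 you may need torch.load(..., weights_only=False),
    # because the checkpoint contains full modules, not just tensors.
    pruned_dict = torch.load(ckpt_path, map_location='cpu')
    return pruned_dict['tokenizer'], pruned_dict['model']

if __name__ == '__main__':
    # Hypothetical path: the checkpoint written during pruning.
    tokenizer, model = load_pruned('prune_log/llama_prune/pytorch_model.bin')
    model.eval()
    inputs = tokenizer('The universe is', return_tensors='pt')
    output_ids = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```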
\n\n[Update]: We uploaded a script to simplify the evaluation process if you want to evaluate the pruned model with the tuned checkpoint. Simply use the following command:\n```\nCUDA_VISIBLE_DEVICES=X bash scripts\u002Fevaluate.sh PATH_OR_NAME_TO_BASE_MODEL PATH_TO_SAVE_TUNE_MODEL  PATH_TO_PRUNE_MODEL EPOCHS_YOU_WANT_TO_EVALUATE\n```\nReplace the placeholders with your model's information; the final argument lists the epochs to iterate over if you want to evaluate several checkpoints in one command. For example:\n```\nCUDA_VISIBLE_DEVICES=1 bash scripts\u002Fevaluate.sh decapoda-research\u002Fllama-7b-hf tune_log\u002Fllama_7B_hessian prune_log\u002Fllama_prune_7B 200 1000 2000\n```\n\n\n### 5. Testing MACs, Params and Memory\n\n* Pre-trained\n```\npython test_speedup.py --model_type pretrain\n```\n* Pruned Model\n```\npython test_speedup.py --model_type pruneLLM --ckpt \u003CYOUR_MODEL_PATH_FOR_PRUNE_MODEL>\n```\n\n## Zero-shot Evaluation\n\nBrief quantitative results for LLaMA-7B:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_ad406ed42d87.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n    \nThe results for Vicuna-7B:\n    \n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_7a6cf4c94edb.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n    \nThe results for ChatGLM-6B:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_0ac185972578.png\" width=\"80%\"> \u003Cbr>\n\u003C\u002Fp>\n\nStatistics for pruned models:\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_aaec1afcf26d.png\" width=\"50%\"> \u003Cbr>\n\u003C\u002Fp>\n\nResults of LLM-Pruner with 2.59M samples:\n| Pruning Ratio | #Param | Memory     | Latency | Speedup | BoolQ | PIQA  | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA  | Average |\n|---------------|--------|------------|---------|---------|-------|-------|-----------|------------|-------|-------|-------|---------|\n| LLaMA-7B      | 6.74B  | 12884.5MiB | 69.32s  | 1x      | 73.18 | 78.35 | 72.99     | 67.01      | 67.45 | 41.38 | 42.40 | 63.25   |\n| LLaMA-5.4B with Alpaca(50k)    | 5.47B  | 10488.4MiB | 58.55s  | 1.18x   | 64.62 | 77.20 | 68.80     | 63.14      | 64.31 | 36.77 | 39.80 | 59.23   |\n| LLaMA-5.4B with LaMini(2.59M)  | 5.47B  | 10488.4MiB | 58.55s  | 1.18x   | 76.57 | 77.37 | 66.60     | 65.82      | 70.62 | 40.70 | 38.80 | 62.36   |\n\n\nMore results can be found in the paper. \n\n## More Examples\n\n| Original Model | Output | Pruned Model |  Output |\n|--|--|--|--| \n| LLaMa-7B | \u003Cu>The universe is the entirety of space, time, matter, and energy that exists.\u003C\u002Fu> It includes all matter and energy, whether visible or not, and in fact the whole universe is invisible. There are many theories regarding its origin and ultimate fate... | LLaMa-5.4B | \u003Cu>The universe is the entirety of space, time, matter, and energy that exists.\u003C\u002Fu> The laws that make up physics — electrons orbiting atoms, magnets attracting other objects, planets orbiting the Sun—have been consistent over ...|\n| Vicuna-7B | \u003Cu>10 steps to build an ios app:\u003C\u002Fu> 1. Research and develop the idea. 2. Decide on the platform you want to use (iOS, Android, or both). 3. Create a plan and budget. 4. Hire a team of developers. 5. 
Design the user interface (UI). 6. Develop the user interface (UI). 7. Test and iterate on the app. 8. Submit the app to the app store. 9. Optimize the app for performance and user experience (UX). 10. Market and promote the app. | Vicuna-5.4B | \u003Cu>10 steps to build an ios app:\u003C\u002Fu> 1. Plan your app: decide what it should do, what it will look like, and what it will cost to build it. 2. Hire a developer: A good iOS developer will help you design the user interface, create the app, and make sure it’s bug-free. 3. Create a user interface: This is where the magic happens. Your app’s user interface will be designed and mocked up at this point... |\n| ChatGLM-6B | \u003Cu>Write a poetry about love and peace.\u003C\u002Fu> In the midst of life's hustle and bustle, I find your love a soothing peace. It is a feeling that cannot be conveyed, A connection that cannot be distanced. With your smile, your eyes, your embrace, I feel a sense of belonging that only you can give. It is a love that lasts forever, A love that cannot be defeat... | ChatGLM-5.6B | \u003Cu>Write a poetry about love and peace.\u003C\u002Fu> Love and peace, two such wonderful things\\ A pure and endless desire in my heart\\ And both of them, I must seek for\\ A long, long time, I know..\\ Love, I know, is a feeling of being\\ A perfect partner, in every sense\\ And peace, I need it, so much, one day\\ A long, long way, my heart will go..|\n\n## Version Information\nDue to changes in the versions of models and repos used in this project, we listed some known version issues and the specific versions needed to reproduce our method:\n1. lm-eval-harness: We use [this commit](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\u002Ftree\u002F4d21ab6b2713cc3a8b4fa7574e89c62ef504e75f) of lm-evaluation-harness, and the code is also included in this repo. Please check [Issue #25](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F25) for details.\n2. LLaMA1-7B: We use the checkpoint of [decapoda-research\u002Fllama-7b-hf](https:\u002F\u002Fhuggingface.co\u002Fdecapoda-research\u002Fllama-7b-hf) in our experiments, which is not available now. Please consider using the copied version, e.g.,[baffo32\u002Fdecapoda-research-llama-7B-hf](https:\u002F\u002Fhuggingface.co\u002Fbaffo32\u002Fdecapoda-research-llama-7B-hf).\n\n\n## Limitations\n* Although we only used 50K data and trained for three hours, more data would definitely be better. We are testing on this.\n* The current compressed model still has several issues, such as generating repetitive tokens or producing nonsensical sentences. We believe there is significant room for improvement in the quality of the compressed model.\n* There are still some models for which we cannot automatically identify the mapping of indexes after concatenation and view operations. Therefore, we need to perform additional manual operations. 
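As a footnote to Version Information item 2: the mirrored checkpoint can be passed wherever the original id appeared, e.g. as `--base_model` or as `config_pretrained=` in the evaluation command. A minimal sketch, assuming the mirror is a faithful re-upload, of checking that the config and tokenizer resolve:

```python
from transformers import AutoConfig, AutoTokenizer

# Mirror of the retired decapoda-research/llama-7b-hf checkpoint;
# assumed (not verified here) to be a faithful copy of the original.
BASE = 'baffo32/decapoda-research-llama-7B-hf'

config = AutoConfig.from_pretrained(BASE)        # what config_pretrained=... loads
tokenizer = AutoTokenizer.from_pretrained(BASE)
print(config.num_hidden_layers, config.hidden_size, tokenizer.vocab_size)
```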
\n\n\n## Acknowledgement\n* Logo is generated by \u003Ca href=\"https:\u002F\u002Fdreamstudio.ai\u002Fgenerate\">Stable Diffusion\u003C\u002Fa>\n* The evaluation of the LLM:  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\">lm-evaluation-harness\u003C\u002Fa>\n* LLaMA: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\"> https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\u003C\u002Fa>\n* Vicuna: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u003C\u002Fa>\n* Peft: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft\">https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft\u003C\u002Fa>\n* Alpaca-lora: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftloen\u002Falpaca-lora\">https:\u002F\u002Fgithub.com\u002Ftloen\u002Falpaca-lora\u003C\u002Fa>\n\n## Citation\nIf you find this project useful, please cite\n```\n@inproceedings{ma2023llmpruner,\n  title={LLM-Pruner: On the Structural Pruning of Large Language Models},\n  author={Xinyin Ma and Gongfan Fang and Xinchao Wang},\n  booktitle={Advances in Neural Information Processing Systems},\n  year={2023},\n}\n```\n```\n@article{fang2023depgraph,\n  title={DepGraph: Towards Any Structural Pruning},\n  author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},\n  journal={The IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n  year={2023}\n}\n```\n","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_fac030ae574d.png\" width=\"20%\"> \u003Cbr>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n\u003Ch1>LLM-Pruner\u003C\u002Fh1>\n  \u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\n    \u003Cimg alt=\"License: Apache 2.0\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-4E94CE.svg\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpytorch.org\u002F\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPyTorch-%3E=v1.7.1-EE4C2C.svg?style=flat-square\" alt=\"PyTorch>=v1.7.1\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-LLaMA-FFB000.svg?style=flat-square\" alt=\"LLaMA\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Llama2-FAB093.svg?style=flat-square\" alt=\"Llama-2\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Llama3&3.1-7CC217.svg?style=flat-square\" alt=\"Llama-3\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Vicuna-924E7D.svg?style=flat-square\" alt=\"Vicuna\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fbloom\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-BLOOM-1A63BD.svg?style=flat-square\" alt=\"BLOOM\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTHUDM\u002FChatGLM-6B\">\n    \u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-chatGLM-6082B6.svg?style=flat-square\" alt=\"chatGLM\">\n  \u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fbaichuan-inc\u002FBaichuan-7B\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLLMs-Baichuan-18ac62.svg?style=flat-square\" alt=\"Baichuan\">\n  \u003C\u002Fa>\n\u003C\u002Fdiv>\n\u003Ch3>关于大语言模型 (LLM) 的结构化剪枝 (Structural Pruning)\u003C\u002Fh3>\n:llama: :llama: :llama: :llama: :llama: 将您的 LLM 压缩至任意规模！:llama: :llama: :llama: :llama: :llama:\n\u003C\u002Fdiv>\n\n\u003Cp align=\"center\">\n\u003Cimg width=\"100%\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_403fcd99a87a.png\">    \n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_76fc3d6e1378.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n\n\n## 简介\n  \n> **[LLM-Pruner: On the Structural Pruning of Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11627)** [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11627)   \n> *Xinyin Ma, Gongfan Fang, Xinchao Wang*   \n> *新加坡国立大学 (National University of Singapore)*  \n\n#### 为什么选择 LLM-Pruner\n- [x] **任务无关的压缩 (Task-agnostic compression)**：压缩后的 LLM 应保留其作为多任务求解器的原始能力。 \n- [x] **较少的训练语料 (Less training corpus)**：在本工作中，我们仅使用 5 万条公开可用的样本 (alpaca) 对 LLM 进行后训练 (Post-training)。  \n- [x] **高效压缩 (Efficient compression)**：剪枝耗时 3 分钟，后训练耗时 3 小时。（您可以根据需要延长）\n- [x] **自动结构化剪枝 (Automatic structural pruning)**：以最少的人工干预剪枝新的 LLM。（进行中）。\n\n#### 支持的 LLM：\n- [x] [Llama-3.1](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmeta-llama\u002Fllama-31-669fc079a0c406a149a5738f)\n- [x] [Llama-3](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fmeta-llama\u002Fmeta-llama-3-66214712577ca38149ebb2b6)\n- [x] [Llama-2](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#1-pruning-discovery-stage--estimation-stage)\n- [x] [LLaMA](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#1-pruning-discovery-stage--estimation-stage)\n- [x] [BLOOM](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#cherry_blossom-bloom) \n- [x] [Vicuna](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#llama-vicuna-pruning)\n- [x] [Baichuan](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#llama-baichuan-pruning)\n- [x] [TinyLlama](https:\u002F\u002Fgithub.com\u002Fjzhang38\u002FTinyLlama) \n\n#### 更新日志：\n* 2024 年 7 月 27 日：:rocket: 支持 GQA (群查询注意力)! 
现在 LLM-Pruner 可以在 Llama3 和 Llama 3.1 上运行。我们仍在测试新 LLM（Llama3, Llama3.1, Gemma）的剪枝结果，您可以在 [此处](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fmore_results#more-results) 找到剪枝结果。\n* 2023 年 8 月 30 日：LLM-Pruner 现在支持 [BLOOM](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fbloom) :cherry_blossom:\n* 2023 年 8 月 14 日：[代码](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) 和 [结果](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) 现已支持使用大规模语料进行微调 (Finetuning)。微调后的 LLaMA-5.4B 模型平均准确率达到 62.36%，非常接近原始 LLaMA-7B (63.25%)。\n* 2023 年 7 月 19 日：:fire: LLM-Pruner 现在支持 Llama-2-7b 和 Llama-2-13b（Huggingface 版本）\n* 2023 年 7 月 18 日：:rocket: 支持 [Baichuan](https:\u002F\u002Fgithub.com\u002Fbaichuan-inc\u002FBaichuan-7B)，一款双语 LLM。\n* 2023 年 5 月 20 日：:tada: 代码和预印本论文发布！\n\n#### 待办事项 (TODO List)：\n- [ ] 针对新 LLM 剪枝的教程。\n- [ ] 支持使用 `.from_pretrained()` 加载模型。\n\n#### **联系我们：**\n加入我们的微信群进行交流：\n  * 微信群 [群组-2](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F3fe4c487-5a5b-43fd-bf64-a5ee62c3dec1) (>200\u002F500), [群组-1](https:\u002F\u002Fgithub.com\u002FVainF\u002FTorch-Pruning\u002Fassets\u002F18592211\u002F35d66130-eb03-4dcb-ad75-8df784460ad3) (500\u002F500, 已满).\n\n\n\n## 目录\n  - [快速开始](#quick-start)\n  - [逐步说明](#step-by-step-instructions)\n  - [零样本评估 (Zero-shot Evaluation)](#zero-shot-evaluation)\n  - [更多示例](#more-examples)\n  - [版本信息](#version-information)\n  - [局限性](#limitations)\n  - [致谢](#acknowledgement)\n  - [引用](#citation)\n\n## 快速开始\n\n### 安装\n```\npip install -r requirement.txt\n```\n\n### 最小示例\n```\nbash script\u002Fllama_prune.sh\n```\n此脚本将压缩 LLaMA-7B 模型，剪去约 20\\% 的参数。所有预训练模型和数据集都将自动下载，因此您无需手动下载资源。首次运行此脚本时，需要一些时间来下载模型和数据集。\n\n    \n## 逐步说明  \n    \n剪枝一个 LLM 需要三个步骤：\n* \u003Cu>发现阶段 (Discovery Stage)\u003C\u002Fu>：发现 LLM 中复杂的相互依赖关系，并找到可移除的最小单元，即**组 (group)**。\n* \u003Cu>估计阶段 (Estimation Stage)\u003C\u002Fu>：估计每个组对模型整体性能的贡献，并决定剪枝哪个组。 \n* \u003Cu>恢复阶段 (Recover Stage)\u003C\u002Fu>：通过快速后训练恢复模型性能。\n  \n剪枝和后训练完成后，我们遵循 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\">lm-evaluation-harness\u003C\u002Fa> 进行评估。\n\n### 1. 
剪枝（发现阶段 + 估计阶段）\n\n:llama: **LLaMA\u002FLlama-2 模型剪枝，约剪去 20% 参数：**\n```\npython hf_prune.py --pruning_ratio 0.25 \\\n      --block_wise \\\n      --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n      --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n      --pruner_type taylor \\\n      --test_after_train \\\n      --device cpu  --eval_device cuda \\\n      --save_ckpt_log_name llama_prune \n```\n参数说明：\n- ``基础模型``：从 LLaMA 或 Llama-2 中选择基础模型，并将 `pretrained_model_name_or_path` 传递给 `--base_model`。模型名称用于 `AutoModel.from_pretrained` 加载预训练的大语言模型（LLM）。例如，如果你想使用 130 亿参数的 llama-2，则将 `meta-llama\u002FLlama-2-13b-hf` 传递给 `--base_model`。\n- ``剪枝策略``：使用相应的命令选项选择块级（block-wise）、通道级（channel-wise）或层级（layer-wise）剪枝：{--block_wise}，{--channel_wise}，{--layer_wise --layer NUMBER_OF_LAYERS}。对于块级剪枝，指定要剪枝的起始和结束层。通道级剪枝不需要额外参数。对于层级剪枝，使用 --layer NUMBER_OF_LAYERS 指定剪枝后保留的层数。\n- ``重要性准则``：使用 --pruner_type 参数从 l1, l2, random 或 taylor 中选择。对于泰勒剪枝器（Taylor pruner），选择以下选项之一：vectorize, param_second, param_first, param_mix。默认使用 param_mix，它结合了近似二阶海森矩阵（Hessian）和一阶梯度（Gradient）。如果使用 l1, l2 或 random，则不需要额外参数。\n- ``剪枝比例``：指定组的剪枝比例。它与参数剪枝率不同，因为组是最小的移除单位。\n- ``设备`` 和 ``评估设备``：剪枝和评估可以在不同的设备上执行。基于泰勒的方法在剪枝期间需要反向计算，这可能需要大量的 GPU 显存。我们的实现使用 CPU 进行重要性估计（也支持 GPU，只需使用 --device cuda）。eval_device 用于测试剪枝后的模型。\n\n\n#### :llama: Vicuna 剪枝\n\n\u003Cdetails>\n\u003Csummary>详情：\u003C\u002Fsummary>\n  \n如果你想尝试 Vicuna，请将参数 `--base_model` 指定为 Vicuna 权重的路径。请遵循 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u003C\u002Fa> 获取 Vicuna 权重。\n```\npython hf_prune.py --pruning_ratio 0.25 \\\n      --block_wise \\\n      --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n      --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n      --pruner_type taylor \\\n      --test_after_train \\\n      --device cpu  --eval_device cuda \\\n      --save_ckpt_log_name llama_prune \\\n      --base_model PATH_TO_VICUNA_WEIGHTS\n```\n\n\u003C\u002Fdetails>\n\n\n#### :llama: Baichuan 剪枝\n\n\u003Cdetails>\n\u003Csummary>详情：\u003C\u002Fsummary>\n  \n有关更多详细信息，请参阅 [Example\u002FBaichuan](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#llama-baichuan-pruning)。\n\n\u003C\u002Fdetails>\n\n#### :llama: Llama3\u002FLlama3.1 剪枝\n\n\u003Cdetails>\n\u003Csummary>详情：\u003C\u002Fsummary>\n  \n```\npython llama3.py --pruning_ratio 0.25 \\\n                 --device cuda --eval_device cuda \\\n                 --base_model meta-llama\u002FMeta-Llama-3-8B-Instruct \\\n                 --block_wise --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n                 --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n                 --save_ckpt_log_name llama3_prune \\\n                 --pruner_type taylor --taylor param_first \\\n                 --max_seq_len 2048 \\\n                 --test_after_train --test_before_train --save_model \n```\n\n\u003C\u002Fdetails>\n    \n### 2. 
后训练（恢复阶段）\n\n* 使用 Alpaca 进行训练，包含 50,000 个样本。以下是单 GPU 训练的示例：\n```\nCUDA_VISIBLE_DEVICES=X python post_training.py --prune_model prune_log\u002FPATH_TO_PRUNE_MODEL\u002Fpytorch_model.bin \\\n      --data_path yahma\u002Falpaca-cleaned \\\n      --lora_r 8 \\\n      --num_epochs 2 \\\n      --learning_rate 1e-4 \\\n      --batch_size 64 \\\n      --output_dir tune_log\u002FPATH_TO_SAVE_TUNE_MODEL \\\n      --wandb_project llama_tune\n```\n请确保将 `PATH_TO_PRUNE_MODEL` 替换为步骤 1 中剪枝模型的路径，并将 `PATH_TO_SAVE_TUNE_MODEL` 替换为你希望保存微调模型的位置。\n\n**提示**：[在 float16 下训练 LLaMA-2 不推荐，已知会产生 nan；因此，模型应在 bfloat16 下训练。](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Ftransformers\u002Fmodel_doc\u002Fllama2#usage-tips)\n\n* 使用 [MBZUAI\u002FLaMini-instruction](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FMBZUAI\u002FLaMini-instruction) 进行训练，包含 2.59M 个样本。以下是使用多 GPU 进行训练的示例：\n```\ndeepspeed --include=localhost:1,2,3,4 post_training.py \\\n      --prune_model prune_log\u002FPATH_TO_PRUNE_MODEL\u002Fpytorch_model.bin \\\n      --data_path MBZUAI\u002FLaMini-instruction  \\\n      --lora_r 8 \\\n      --num_epochs 3  \\\n      --output_dir tune_log\u002FPATH_TO_SAVE_TUNE_MODEL \\\n      --extra_val_dataset wikitext2,ptb \\\n      --wandb_project llmpruner_lamini_tune \\\n      --learning_rate 5e-5 \\\n      --cache_dataset\n```\n\n### 3. 生成\n\n#### 如何加载剪枝\u002F预训练模型：\n\n对于剪枝模型，只需使用以下命令加载你的模型。 \n``` \n  pruned_dict = torch.load(YOUR_CHECKPOINT_PATH, map_location='cpu')\n  tokenizer, model = pruned_dict['tokenizer'], pruned_dict['model']\n```\n由于剪枝模型中各层配置不同（某些层宽度更大，而另一些层被剪枝得更多），无法使用 Hugging Face 提供的 `.from_pretrained()` 加载模型。目前，我们使用 `torch.save` 保存剪枝模型，并使用 `torch.load` 加载。\n  \n#### 使用 Gradio 界面进行生成\n我们提供了一个简单的脚本，用于使用预训练\u002F剪枝模型\u002F经过后训练的剪枝模型生成文本。 \n    \n* LLaMA-7B 预训练模型\n```\npython generate.py --model_type pretrain\n```\n* 未经过后训练的剪枝模型\n```\npython generate.py --model_type pruneLLM --ckpt \u003CYOUR_MODEL_PATH_FOR_PRUNE_MODEL>\n```\n* 经过后训练的剪枝模型 \n```\npython generate.py --model_type tune_prune_LLM --ckpt \u003CYOUR_CKPT_PATH_FOR_PRUNE_MODEL> --lora_ckpt \u003CYOUR_CKPT_PATH_FOR_LORA_WEIGHT>\n```\n\n上述指令将在本地部署您的大语言模型（LLM）。 \n  \n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_1c381262caee.png\" width=\"100%\">\u003C\u002Fimg>\n\u003C\u002Fdiv>\n\n### 4. 
评估\n为了评估剪枝模型（pruned model）的性能，我们遵循 [lm-evaluation-harness（语言模型评估工具集）](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness) 来评估模型：\n* 步骤 1：如果您只需要评估剪枝模型，请跳过此步骤并跳转到步骤 2。\n此步骤是为了整理文件以满足 `lm-evaluation-harness` 的输入要求。[来自后训练阶段的微调检查点](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner#2-post-training-recover-stage) 将按以下格式保存：\n```\n- PATH_TO_SAVE_TUNE_MODEL\n  | - checkpoint-200\n      | - pytorch_model.bin\n      | - optimizer.pt\n      ...\n  | - checkpoint-400\n  | - checkpoint-600\n  ...\n  | - adapter_config.bin\n  | - adapter_config.json\n```\n通过以下命令整理文件：\n```\ncd PATH_TO_SAVE_TUNE_MODEL\nexport epoch=YOUR_EVALUATE_EPOCH\ncp adapter_config.json checkpoint-$epoch\u002F\nmv checkpoint-$epoch\u002Fpytorch_model.bin checkpoint-$epoch\u002Fadapter_model.bin\n```\n如果您想评估 `checkpoint-200`，则通过 `export epoch=200` 将 epoch（轮次）设置为 200。\n\n\n* 步骤 2：\n```\nexport PYTHONPATH='.'\npython lm-evaluation-harness\u002Fmain.py --model hf-causal-experimental \\\n       --model_args checkpoint=PATH_TO_PRUNE_MODEL,peft=PATH_TO_SAVE_TUNE_MODEL,config_pretrained=PATH_OR_NAME_TO_BASE_MODEL \\\n       --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \\\n       --device cuda:0 --no_cache \\\n       --output_path PATH_TO_SAVE_EVALUATION_LOG \n```\n在此处，将 `PATH_TO_PRUNE_MODEL` 和 `PATH_TO_SAVE_TUNE_MODEL` 替换为您保存剪枝模型和微调模型的路径，而 `PATH_OR_NAME_TO_BASE_MODEL` 用于加载基础模型的配置文件。 \n\n[更新]：如果您想使用微调检查点评估剪枝模型，我们上传了一个脚本来简化评估过程。直接使用以下命令：\n```\nCUDA_VISIBLE_DEVICES=X bash scripts\u002Fevaluate.sh PATH_OR_NAME_TO_BASE_MODEL PATH_TO_SAVE_TUNE_MODEL  PATH_TO_PRUNE_MODEL EPOCHS_YOU_WANT_TO_EVALUATE\n```\n在命令中替换您模型的必要信息。最后一个参数用于在一个命令中迭代不同的 epoch，以便评估多个检查点。例如：\n```\nCUDA_VISIBLE_DEVICES=1 bash scripts\u002Fevaluate.sh decapoda-research\u002Fllama-7b-hf tune_log\u002Fllama_7B_hessian prune_log\u002Fllama_prune_7B 200 1000 2000\n```\n\n\n### 5. 
测试 MACs（乘加运算次数）、Params（参数量）和内存\n\n* 预训练\n```\npython test_speedup.py --model_type pretrain\n```\n* 剪枝模型\n```\npython test_speedup.py --model_type pruneLLM --ckpt \u003CYOUR_MODEL_PATH_FOR_PRUNE_MODEL>\n```\n\n## 零样本 (Zero-shot) 评估\n\nLLaMA-7B 的简要定量结果：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_ad406ed42d87.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n    \nVicuna-7B 的结果：\n    \n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_7a6cf4c94edb.png\" width=\"100%\"> \u003Cbr>\n\u003C\u002Fp>\n    \nChatGLM-6B 的结果：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_0ac185972578.png\" width=\"80%\"> \u003Cbr>\n\u003C\u002Fp>\n\n剪枝模型的统计数据：\n\n\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_readme_aaec1afcf26d.png\" width=\"50%\"> \u003Cbr>\n\u003C\u002Fp>\n\nLLM-Pruner 使用 2.59M 样本的结果：\n| 剪枝比例 | 参数量 | 内存     | 延迟 | 加速比 | BoolQ | PIQA  | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA  | 平均 |\n|---------------|--------|------------|---------|---------|-------|-------|-----------|------------|-------|-------|-------|---------|\n| LLaMA-7B      | 6.74B  | 12884.5MiB | 69.32s  | 1x      | 73.18 | 78.35 | 72.99     | 67.01      | 67.45 | 41.38 | 42.40 | 63.25   |\n| LLaMA-5.4B with Alpaca(50k)    | 5.47B  | 10488.4MiB | 58.55s  | 1.18x   | 64.62 | 77.20 | 68.80     | 63.14      | 64.31 | 36.77 | 39.80 | 59.23   |\n| LLaMA-5.4B with LaMini(2.59M)  | 5.47B  | 10488.4MiB | 58.55s  | 1.18x   | 76.57 | 77.37 | 66.60     | 65.82      | 70.62 | 40.70 | 38.80 | 62.36   |\n\n\n更多结果可在论文中找到。 \n\n## 更多示例\n\n| 原始模型 | 输出 | 剪枝模型 | 输出 |\n|--|--|--|--| \n| LLaMa-7B | \u003Cu>The universe is the entirety of space, time, matter, and energy that exists.\u003C\u002Fu> It includes all matter and energy, whether visible or not, and in fact the whole universe is invisible. There are many theories regarding its origin and ultimate fate... | LLaMa-5.4B | \u003Cu>The universe is the entirety of space, time, matter, and energy that exists.\u003C\u002Fu> The laws that make up physics — electrons orbiting atoms, magnets attracting other objects, planets orbiting the Sun—have been consistent over ...|\n| Vicuna-7B | \u003Cu>10 steps to build an ios app:\u003C\u002Fu> 1. Research and develop the idea. 2. Decide on the platform you want to use (iOS, Android, or both). 3. Create a plan and budget. 4. Hire a team of developers. 5. Design the user interface (UI). 6. Develop the user interface (UI). 7. Test and iterate on the app. 8. Submit the app to the app store. 9. Optimize the app for performance and user experience (UX). 10. Market and promote the app. | Vicuna-5.4B | \u003Cu>10 steps to build an ios app:\u003C\u002Fu> 1. Plan your app: decide what it should do, what it will look like, and what it will cost to build it. 2. Hire a developer: A good iOS developer will help you design the user interface, create the app, and make sure it's bug-free. 3. Create a user interface: This is where the magic happens. Your app's user interface will be designed and mocked up at this point... |\n| ChatGLM-6B | \u003Cu>Write a poetry about love and peace.\u003C\u002Fu> In the midst of life's hustle and bustle, I find your love a soothing peace. It is a feeling that cannot be conveyed, A connection that cannot be distanced. 
With your smile, your eyes, your embrace, I feel a sense of belonging that only you can give. It is a love that lasts forever, A love that cannot be defeat... | ChatGLM-5.6B | \u003Cu>Write a poetry about love and peace.\u003C\u002Fu> Love and peace, two such wonderful things\\ A pure and endless desire in my heart\\ And both of them, I must seek for\\ A long, long time, I know..\\ Love, I know, is a feeling of being\\ A perfect partner, in every sense\\ And peace, I need it, so much, one day\\ A long, long way, my heart will go..|\n\n## 版本信息\n由于本项目使用的模型和仓库版本发生变化，我们列出了一些已知的版本问题以及复现我们方法所需的具体版本：\n1. lm-eval-harness：我们使用了 lm-evaluation-harness 的 [此 commit](https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\u002Ftree\u002F4d21ab6b2713cc3a8b4fa7574e89c62ef504e75f)，代码也包含在此仓库中。详细信息请查看 [Issue #25](https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F25)。\n2. LLaMA1-7B：我们在实验中使用的是 [decapoda-research\u002Fllama-7b-hf](https:\u002F\u002Fhuggingface.co\u002Fdecapoda-research\u002Fllama-7b-hf) 的检查点，目前该链接不可用。请考虑使用复制的版本，例如 [baffo32\u002Fdecapoda-research-llama-7B-hf](https:\u002F\u002Fhuggingface.co\u002Fbaffo32\u002Fdecapoda-research-llama-7B-hf)。\n\n## 局限性\n* 尽管我们仅使用了 5 万条数据并训练了三个小时，但更多的数据肯定会更好。我们正在对此进行测试。\n* 当前的压缩模型仍然存在一些问题，例如生成重复的 token(词元) 或产生无意义的句子。我们相信压缩模型的质量还有很大的提升空间。\n* 仍然有一些模型，我们无法在 concatenation(拼接) 和 view(视图) 操作后自动识别索引的映射关系。因此，我们需要执行额外的手动操作。 \n\n\n## 致谢\n* Logo 由 \u003Ca href=\"https:\u002F\u002Fdreamstudio.ai\u002Fgenerate\">Stable Diffusion\u003C\u002Fa> 生成\n* 大语言模型 (LLM) 评估： \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FEleutherAI\u002Flm-evaluation-harness\">lm-evaluation-harness\u003C\u002Fa>\n* LLaMA: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\"> https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama\u003C\u002Fa>\n* Vicuna: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\">https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u003C\u002Fa>\n* Peft: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft\">https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fpeft\u003C\u002Fa>\n* Alpaca-lora: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftloen\u002Falpaca-lora\">https:\u002F\u002Fgithub.com\u002Ftloen\u002Falpaca-lora\u003C\u002Fa>\n\n## 引用\n如果您觉得本项目有用，请引用\n```\n@inproceedings{ma2023llmpruner,\n  title={LLM-Pruner: On the Structural Pruning of Large Language Models},\n  author={Xinyin Ma and Gongfan Fang and Xinchao Wang},\n  booktitle={Advances in Neural Information Processing Systems},\n  year={2023},\n}\n```\n```\n@article{fang2023depgraph,\n  title={DepGraph: Towards Any Structural Pruning},\n  author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},\n  journal={The IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n  year={2023}\n}\n```","# LLM-Pruner 快速上手指南\n\n**LLM-Pruner** 是一款专注于大语言模型（LLM）结构剪枝的开源工具，支持将主流 LLM 压缩至任意大小，同时保留其多任务处理能力。支持 Llama、Vicuna、Baichuan、BLOOM 等模型架构。\n\n## 环境准备\n\n*   **系统要求**: Linux \u002F Windows \u002F macOS\n*   **Python 版本**: 兼容标准 Python 环境\n*   **深度学习框架**: PyTorch >= v1.7.1\n*   **硬件建议**: \n    *   剪枝与后训练阶段建议使用 GPU 以加速计算。\n    *   使用 Taylor 剪枝策略时，需确保显存充足（也可在 CPU 上进行重要性估计，但速度较慢）。\n*   **网络提示**: 首次运行会自动下载预训练模型和数据集，国内用户建议配置 HuggingFace 镜像以提升下载速度。\n\n## 安装步骤\n\n1.  **克隆项目代码**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner.git\n    cd LLM-Pruner\n    ```\n\n2.  
**安装依赖**\n    ```bash\n    pip install -r requirement.txt\n    ```\n\n## 基本使用\n\n### 1. 一键剪枝（最小示例）\n项目提供了一个自动化脚本，可自动下载资源并压缩 LLaMA-7B 模型（约剪枝 20% 参数）。\n\n```bash\nbash script\u002Fllama_prune.sh\n```\n*注：首次运行需要时间下载模型和数据集，请耐心等待。*\n\n### 2. 自定义剪枝流程\n若需更精细控制，可使用命令行直接调用 `hf_prune.py`。以下示例演示了基于 LLaMA\u002FLlama-2 的块级剪枝（Block-wise Pruning）：\n\n```bash\npython hf_prune.py --pruning_ratio 0.25 \\\n      --block_wise \\\n      --block_mlp_layer_start 4 --block_mlp_layer_end 30 \\\n      --block_attention_layer_start 4 --block_attention_layer_end 30 \\\n      --pruner_type taylor \\\n      --test_after_train \\\n      --device cpu  --eval_device cuda \\\n      --save_ckpt_log_name llama_prune\n```\n\n**关键参数说明：**\n*   `--base_model`: 指定基础模型路径（如 `meta-llama\u002FLlama-2-13b-hf`）。\n*   `--pruning_ratio`: 剪枝比例（按组移除，非参数百分比）。\n*   `--pruner_type`: 重要性准则（可选 `l1`, `l2`, `random`, `taylor`）。\n*   `--device`: 剪枝计算设备（Taylor 方法建议 GPU，CPU 亦可）。\n\n### 3. 后训练恢复（Post-Training）\n剪枝后需使用少量数据（如 Alpaca 50k 样本）进行微调以恢复性能。\n\n**单卡训练示例：**\n```bash\nCUDA_VISIBLE_DEVICES=X python post_training.py --prune_model prune_log\u002FPATH_TO_PRUNE_MODEL\u002Fpytorch_model.bin \\\n      --data_path yahma\u002Falpaca-cleaned \\\n      --lora_r 8 \\\n      --num_epochs 2 \\\n      --learning_rate 1e-4 \\\n      --batch_size 64 \\\n      --output_dir tune_log\u002FPATH_TO_SAVE_TUNE_MODEL \\\n      --wandb_project llama_tune\n```\n\n### 4. 加载模型\n由于剪枝后的模型结构发生变化，无法直接使用 HuggingFace 的 `.from_pretrained()`，请使用以下方式加载：\n\n```python\npruned_dict = torch.load(YOUR_CHECKPOINT_PATH, map_location='cpu')\ntokenizer, model = pruned_dict['tokenizer'], pruned_dict['model']\n```","某电商初创团队计划将 Llama-3-8B 模型部署到本地边缘服务器以构建私有客服机器人，但面临显存不足和推理成本过高的严峻挑战。\n\n### 没有 LLM-Pruner 时\n- 原始 8B 模型体积庞大，需要昂贵的多卡 GPU 集群才能勉强运行，硬件预算超支。\n- 推理延迟高达数秒，用户等待时间过长，严重影响客服交互体验。\n- 若强行蒸馏为小模型，会丢失关键逻辑处理能力，导致回答质量大幅下降。\n- 缺乏高效的压缩方案，团队只能在“高性能高成本”与“低成本低性能”之间妥协。\n\n### 使用 LLM-Pruner 后\n- LLM-Pruner 通过结构剪枝将模型压缩至 4B 规模，显存占用减少近 50%，单卡即可流畅部署。\n- 仅需 5 万条公开样本进行后训练，便保留了原模型在问答场景下的核心逻辑与准确性。\n- 推理速度提升明显，响应时间缩短至毫秒级，满足了实时对话的流畅性要求。\n- 自动化剪枝流程无需大量人工干预，团队在三天内完成了模型适配并上线测试。\n\nLLM-Pruner 成功实现了大模型的结构化压缩，让高性能模型在资源受限环境下也能低成本落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhorseee_LLM-Pruner_79c2349b.png","horseee","Ma Xinyin","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhorseee_52d6b46b.jpg","Ph.D. 
Candidate\r\n @ NUS XML Lab🤔","National University of Singapore","Singapore","maxinyin@u.nus.edu","horseeeMa","horseee.github.io","https:\u002F\u002Fgithub.com\u002Fhorseee",[86,90,94],{"name":87,"color":88,"percentage":89},"Python","#3572A5",99.3,{"name":91,"color":92,"percentage":93},"C++","#f34b7d",0.5,{"name":95,"color":96,"percentage":97},"Shell","#89e051",0.2,1115,131,"2026-04-04T11:18:16","Apache-2.0","未说明","需要 NVIDIA GPU 及 CUDA 环境，Taylor 剪枝法需较大显存，具体视模型规模而定",{"notes":105,"python":102,"dependencies":106},"首次运行自动下载模型和数据集；LLaMA-2 训练推荐 bfloat16 精度；剪枝模型需用 torch.load 加载；支持 GQA 架构；后训练可用 Alpaca 或 LaMini 数据集。",[107,108,109,110],"torch>=1.7.1","transformers","deepspeed","wandb",[13,26],[113,114,115,116,117,118,119,120,121,122,123,124,125],"compression","language-model","llm","pruning","pruning-algorithms","baichuan","chatglm","llama","vicuna","llama-2","bloom","neurips-2023","llama3",null,"2026-03-27T02:49:30.150509","2026-04-06T05:27:29.671760",[130,135,140,145,149,154],{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},2668,"如何使用 LLM-Pruner 对 Baichuan 模型进行剪枝？","LLM-Pruner 已更新支持最新的 Baichuan-13B-chat 模型剪枝及后训练代码。之前存在的 `post_training.py` 中 `RuntimeError: element 0 of tensors does not require grad` 的 bug 已修复。请参照官方文档中的示例指令进行操作：https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Ftree\u002Fmain\u002Fexamples#llama-baichuan","https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F11",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},2669,"`test_speedup.py` 脚本运行时出现 RuntimeError 怎么办？","该脚本目前主要支持在 GPU 上测试。如果遇到错误，请尝试更新到最新版本，维护者已修复了可能导致此问题的 bug。如果在 CPU 上运行正常但 GPU 仍报错，建议提供您的运行环境信息以便进一步排查。","https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F7",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2670,"剪枝后的模型无法使用 `from_pretrained()` 加载，该如何推理？","由于剪枝后各层配置可能不一致（如某些层宽度不同），暂时无法直接使用 `from_pretrained()` 加载。建议使用项目提供的 `generate.py` 脚本，或采用以下手动加载方式：\n```python\ncheckpoint = torch.load('pytorch_model.bin', map_location='cpu')\nmodel, tokenizer = checkpoint['model'], checkpoint['tokenizer']\n```","https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F3",{"id":146,"question_zh":147,"answer_zh":148,"source_url":144},2671,"剪枝后的模型是否支持使用 `model.from_pretrained()` 加载？","目前暂不支持。原因是剪枝导致模型各层配置不一致，无法统一加载。维护者正在努力改进以支持此功能，在此之前请继续使用手动加载 checkpoint 的方式（见 Issue #3）进行推理。",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},2672,"在 CPU 上剪枝 LLaMA2 时出现 `addmm_impl_cpu_` 错误如何解决？","这是因为在使用 `--device cpu` 和 `--save_model` 参数时，模型会被转换为 half 精度，而 CPU 不支持该操作。解决方法是在 `torch.save` 之后添加 `model.float()` 将模型转回 float 精度，即可解决该错误。","https:\u002F\u002Fgithub.com\u002Fhorseee\u002FLLM-Pruner\u002Fissues\u002F22",{"id":155,"question_zh":156,"answer_zh":157,"source_url":153},2673,"剪枝后的 LLaMA2 模型推理时报错，提示维度计算问题怎么办？","LLaMA2 的代码需要修改以适应更新后的属性。部分维度计算在官方代码中被固定，不适合剪枝模型的推理。解决方法是修改 `modeling_llama.py` 中的固定属性，例如手动设置 `self.num_key_value_heads` 等关键维度参数。",[]]
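The workaround from the `addmm_impl_cpu_` FAQ (Issue #22), in code form. A minimal sketch with an illustrative helper name; it assumes `model` is the pruned model that was cast to half precision for saving:

```python
import torch

def save_pruned_checkpoint(model, tokenizer, path: str) -> None:
    """Save a pruned model the way LLM-Pruner does, then restore float32.

    With --device cpu and --save_model, the model is cast to half precision
    before saving, but CPU kernels such as addmm do not support half tensors.
    Calling .float() right after torch.save avoids the addmm_impl_cpu_ error
    in any CPU computation that follows.
    """
    torch.save({'model': model, 'tokenizer': tokenizer}, path)
    model.float()  # cast parameters back to float32 for CPU ops
```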