[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-rohan-paul--LLM-FineTuning-Large-Language-Models":3,"tool-rohan-paul--LLM-FineTuning-Large-Language-Models":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":94,"forks":95,"last_commit_at":96,"license":81,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":114,"github_topics":115,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":128,"updated_at":129,"faqs":130,"releases":136},1313,"rohan-paul\u002FLLM-FineTuning-Large-Language-Models","LLM-FineTuning-Large-Language-Models","LLM (Large Language Model) FineTuning","LLM-FineTuning-Large-Language-Models 是一份面向实践的“大模型微调指南+代码仓库”。它把 Llama-3、Mistral、Gemma、Falcon 等主流开源模型的微调流程拆成可直接运行的 Jupyter Notebook，并配套中文讲解视频，手把手教你用 4bit 量化、QLoRA、PEFT 等技巧，在单张 GPU 甚至 Colab 免费额度里就能把模型“炼”成自己的专属助手。  \n它解决了“想微调却不会下手、官方文档太简略、显卡又不够”的常见痛点：示例覆盖对话、代码、超长文本、网页抓取等场景，一行命令即可复现。  \n适合 AI 开发者、算法研究员、高校学生，以及对大模型落地感兴趣的技术团队。即使只有基础 Python 知识，也能跟着视频和 Notebook 快速跑通第一个微调实验。","# LLM (Large Language Models) FineTuning Projects and notes on common practical techniques\n\n# [Find me in Twitter](https:\u002F\u002Ftwitter.com\u002Frohanpaul_ai)\n\n## [📚 I write daily for my 112K+ readers on actionable AI developments. Get a 1300+ page Python book as soon as you subscribing (its FREE) ↓↓)](https:\u002F\u002Fwww.rohan-paul.com\u002Fs\u002Fdaily-ai-newsletter\u002Farchive?sort=new)\n\n[logo]: https:\u002F\u002Fgithub.com\u002Frohan-paul\u002Frohan-paul\u002Fblob\u002Fmaster\u002Fassets\u002Fnewsletter_rohan.png\n\n[![Rohan's Newsletter][logo]](https:\u002F\u002Fwww.rohan-paul.com\u002F) &nbsp;\n\n\n\n### Fine-tuning LLM (and YouTube Video Explanations)\n\n| Notebook | 🟠 **YouTube Video**|\n| -------- | ---------------------- |\n| [Finetune Llama-3-8B with unsloth 4bit quantized with ORPO](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLlama_3_Finetuning_ORPO_with_Unsloth.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6ikUpJcDrPs&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=31) |\n| [Llama-3 Finetuning on custom dataset with unsloth](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLlama-3_Finetuning_on_custom_dataset_with_unsloth.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AmVEGPS9JIg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=25) |\n| [CodeLLaMA-34B - Conversational Agent ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FCodeLLaMA_34B_Conversation_with_Streamlit.py) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=815NpXvniIg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=16&ab_channel=Rohan-Paul-AI) |\n| [Inference Yarn-Llama-2-13b-128k with KV Cache to answer quiz on very long textbook](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FInference_Yarn-Llama-2-13b-128k_Github.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RYTOQERqVsg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=14&ab_channel=Rohan-Paul-AI)|\n| [Mistral 7B FineTuning with_PEFT and QLORA](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_FineTuning_with_PEFT_and_QLORA.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6DGYj1EEWOw&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=13&ab_channel=Rohan-Paul-AI)|\n| [Falcon finetuning on openassistant-guanaco](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFalcon-7B_FineTuning_with_PEFT_and_QLORA.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fEzuBFi35J4&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=11&ab_channel=Rohan-Paul-AI)|\n| [Fine Tuning Phi 1_5 with PEFT and QLoRA](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFineTuning_phi-1_5_with_PRFT_LoRA.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=J0RbOtLrJhQ&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=10&ab_channel=Rohan-Paul-AI)|\n| [Web scraping with Large Language Models (LLM)-AnthropicAI + LangChainAI](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FWeb%20scraping%20with%20Large%20Language%20Models%20(LLM)-AnthropicAI%20%2B%20LangChainAI.ipynb) | [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QAY82UvrsHg&list=PLxqBkZuBynVTiTEvP6-GYf35yA6OqIN7Y&index=2&ab_channel=Rohan-Paul-AI)|\n\n\n---------------------------\n\n### Fine-tuning LLM\n\n| Notebook | Colab |\n| -------- | ------------- |\n| 📌 [Gemma_2b_finetuning_ORPO_full_precision](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Fgemma-2b_ORPO_FineTuning_full_precision\u002Fv2_Colab_Gemma_2b_orpo.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Fgemma-2b_ORPO_FineTuning_full_precision\u002FGemma_2b_orpo_full_precision_Colab.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Jamba_Finetuning_Colab-Pro](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FJamba_Finetuning_Colab-Pro.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Finetune codellama-34B with QLoRA](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_codellama-34B-with-QLoRA.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_codellama-34B-with-QLoRA.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Mixtral Chatbot with Gradio](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMixtral_Chatbot_with_Gradio)|\n| 📌 [togetherai api to run Mixtral](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftogetherai-api-with_Mixtral.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftogetherai-api-with_Mixtral.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Integrating TogetherAI with LangChain 🦙](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTogetherAI_API_with_LangChain.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTogetherAI_API_with_LangChain.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Mistral-7B-Instruct_GPTQ - Finetune on finance-alpaca dataset 🦙](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral-7B-Instruct_GPTQ-finetune.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7B_Instruct_GPTQ_finetune.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Mistral 7b FineTuning with DPO Direct_Preference_Optimization](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7b_FineTuning_with_DPO_Direct_Preference_Optimization.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7b_FineTuning_with_DPO_Direct_Preference_Optimization.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Finetune llama_2_GPTQ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_llama_2_GPTQ)\n| 📌 [TinyLlama with Unsloth and_RoPE_Scaling dolly-15 dataset](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTinyLlama_with_Unsloth_and_RoPE_Scaling_dolly-15k.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTinyLlama_with_Unsloth_and_RoPE_Scaling_dolly-15k.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n| 📌 [Tinyllama fine-tuning with Taylor_Swift Song lyrics](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n\n\n\n\n---------------------------\n\n### LLM Techniques and utils - Explained\n\n| LLM Concepts |\n| -------- |\n| 📌 [DPO (Direct Preference Optimization) training and its datasets](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FDPOTrainer.ipynb)|\n| 📌 [4-bit LLM Quantization with GPTQ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002F4-bit_LLM_Quantization_with_GPTQ.ipynb)|\n| 📌 [Quantize with HF Transformers](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FQuantize_with_HF_transformers)|\n| 📌 [Understanding rank r in LoRA and related Matrix_Math](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FUnderstanding_rank_r_in_LoRA_and_related_Matrix_Math.ipynb)|\n| 📌 [Rotary Embeddings (RopE) is one of the Fundamental Building Blocks of LlaMA-2 Implementation](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FRoPE-As-Implemented-in-LlaMa-Source-Code.ipynb)|\n| 📌 [Chat Templates in HuggingFace](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002Fapply_chat_template.ipynb)|\n| 📌 [How is Mixtral 8x7B is a dense 47Bn param model](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FMOE-Mixture-of-Experts\u002FMixtral_8x7B_MoE_Why_47Bn_param_by_Shared_Param.md)|\n| 📌 [The concept of **validation log perplexity** in LLM training - a note on fundamentals.](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FValidation_log_perplexity.md)|\n| 📌 [Why we need to identify `target_layers` for LoRA\u002FQLoRA](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002Ftargets_layers_in_peft_and_meaning_of_Rank_of_a_Matrix.ipynb)|\n| 📌 [Evaluate Token per sec](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FEvaluate_token_per_sec.ipynb)|\n| 📌 [traversing through nested attributes (or sub-modules) of a PyTorch module](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FTraverse_through_sub-modules_of_PyTorch_Model.ipynb)|\n| 📌 [Implementation of Sparse Mixtures-of-Experts layer in PyTorch from Mistral Official Repo](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FMoE_implementation_Mistral_official_Repo.ipynb)|\n| 📌 [Util method to extract a specific token's representation from the last hidden states of a transformer model.](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FSelect_last_meaningful_token_from_each_sequence.ipynb)|\n| 📌 [Convert PyTorch model's parameters and tensors to half-precision floating-point format](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FConvert_Pytorch_model_to_half_precision.ipynb)|\n| 📌 [Quantizing 🤗 Transformers models with the GPTQ method](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FQuantizing_Transformers_with_GPTQ.ipynb)|\n| 📌 [Quantize Mixtral-8x7B so it can run in 24GB GPU](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FQuantize_mixtral-instruct-awq_in_so_it_can_run_in_24GB.ipynb)|\n| 📌 [What is GGML or GGUF in the world of Large Language Models ?](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FGGUF_GGML_GPTQ-basics.md)|\n\n\n\n\n\n---------------------------\n\n## Other Smaller Language Models\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=-rqmj_tfQLo&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=34&ab_channel=Rohan-Paul-AI) [DeBERTa Fine Tuning for Amazon Review Dataset Pytorch](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDeBERTa%20Fine%20Tuning-for%20Amazon%20Review%20Dataset%20Pytorch.ipynb)\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=4nNbg4bWDrQ&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=32&ab_channel=Rohan-Paul-AI) [FineTuning BERT for Multi-Class Classification on custom Dataset](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFineTuning_BERT_for_Multi_Class_Classification_Turkish)\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=91msLyGC-LI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=28&ab_channel=Rohan-Paul-AI) [Document STRIDE when Tokenizing with HuggingFace Transformer for NLP Projects](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=91msLyGC-LI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=28&ab_channel=Rohan-Paul-AI)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=cplo2UyNw24&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=31&ab_channel=Rohan-Paul-AI) [Fine-tuning of a PreTrained Transformer model - what really happens to the weights (parameters)]()\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=pqpaHeCsuVI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=30&ab_channel=Rohan-Paul-AI) [Cerebras-GPT New Large Language Model Open Sourced with Apache 2.0 License](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=pqpaHeCsuVI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=30&ab_channel=Rohan-Paul-AI)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6X0xfXMKCjM&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=29&ab_channel=Rohan-Paul-AI) [Roberta-Large Named Entity Recognition on Kaggle NLP Competition with PyTorch](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FRoberta-Large-NER-on-Kaggle-NLP%20Competition)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=EHtHF9Kvm0Y&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=28&ab_channel=Rohan-Paul-AI) [Longformer end to end with Kaggle NLP competition](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FLongformer%20end%20to%20end%20with%20Kaggle%20NLP%20competition)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=tvdIF1FU7fg&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=24) [Zero Shot Multilingual Sentiment Classification with PyTorch Lightning](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fzero_shot_multilingual_sentiment_classification_with_USEm)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CwLPglxw1WA&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=23) [Fine Tuning Transformer (BERT) for Customer Review Prediction | NLP | HuggingFace ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_HuggingFace_Transformer_BERT_Yelp_Customer_Review_Predictions)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=30zPz5Xz-8g&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=21) [Understanding BERT Embeddings and Tokenization | NLP | HuggingFace](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FUndersting_BERT_Embedding_Vector)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fl0ow-nD8FM&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=20) [Topic Modeling with BERTopic | arxiv-abstract dataset](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic-modeling-with-bertopic-arxiv-abstract)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vrDdnQfav0s&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=21) [Latent Dirichlet Allocation (LDA) for Topic Modeling](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic_Modeling_with_LDA.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iCL1TmRQ0sk&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=19) [Adding a custom task-specific Layer to a HuggingFace Pretrained Model](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FAdd-task_specific_custom_layer_to_model.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ZvsH09XGuZ0&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=18) [Fine Tuning DistilBERT for Multiclass Text Classification](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FMulti-class-text-classifica_fine-tuning-distilbert.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=dzyDHMycx_c&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=18) [Fine Tuning BERT for Named Entity Recognition (NER)](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FYT_Fine_tuning_BERT_NER_v1.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fLqiPks4neU&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=15) [Text Summarization by Fine Tuning Transformer Model | NLP ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_Pegasus_for_Text_Summarization.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HDSNjrxSwqw&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=14) [Text Summarization with Transformer - BART + T5 + Pegasus\n  ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FText_Summarization_%20BART%20_T5_Pegasus.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=oxEXBJQG27A&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=13) [Debarta-v3-large model fine tuning for Kaggle Competition Feedback-Prize | NLP](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDeberta-v3-large-For_Kaggle_Competition_Feedback-Prize\u002Fdeberta-v3-large-For_Kaggle_Competition_Feedback-Prize.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SmWbKiueYVU&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=12) [Topic Modeling with BERT and Automatic Cluster Labeling](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic_Modeling_with_BERT_and_Automatic_cluster_labeling\u002FTopic_Modeling.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Ua_ToM-CG5Q&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=11) [Decoding strategies while generating text with GPT-2](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDecoding_Strategies_for_text_generation\u002FDecoding_Strategies_for_text_generation.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=VrJwKdls6d4&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=12) [Fake News Classification with LSTM and Tensorflow](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFake_News_Classification_with_LSTM_Tensorflow.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=hgg2GAgDLzA&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=11) [FinBERT Sentiment Analysis for very Long Text (more than 512 Tokens) | PART 2](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFinBERT_Long_Text_Part_2.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WEAAs_0etJQ&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=9) [FinBERT Sentiment Analysis for very Long Text Corpus (more than 512 Tokens) | PART-1](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFinBERT_Long_Text_Part_2.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fwDTLQDKJTE&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=8) [Cosine Similarity between sentences with Transformers HuggingFace](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FCosine_Similarity_between_sentences_with_Transformers.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=urMUa4Nw_B8&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=7) [Zero Shot Learning - Cross Lingual Named Entity Recognition with XLM-Roberta](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FZero_Shot_Learning_multilingual-NER.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Hp8_Enwzdxk&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=6) [BERT from Hugging Face - Few Baseline Application | NLP](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FBERT_HuggingFace_Basic_Usages.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CHFiTTPeyUw&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=9) [Transformer Encoder with Scaled Dot Product from Scratch](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTransformer_From_Scratch\u002FTransformer_From_Scratch.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=_IGdekeBCoE&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=7) [Fuzzy String Matching in Natural Language Processing | NLP](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFuzzy-String-Matching.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SzSANHjYhfg&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=6) [Understanding Word Vectors usage with Spacy Word and Sentence Similarity](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FWord-Vectors-Understanding-with-Spacy.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=TxTxWAohW7E&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=5) [Named Entity Recognition NER using spaCy - Extracting Subject Verb Action](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FNamed_Entity_Recognition_NER_using_spaCy%20-%20Extracting_Subject_Verb_Action.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zcW2HouIIQg&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=5) [Fine-Tuning-DistilBert - Hugging Face Transformer for Poem Sentiment Prediction | NLP](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_DistilBert_Poem_Sentiments.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=0Y03waAL4Gw&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=4) [Fine Tuning BERT-Based-Uncased Hugging Face Model on Kaggle Hate Speech Dataset](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fbert-base-uncased-fine-tuned-kaggle-hate-speech-dataset.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=DpzQNQI-S3s&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=3) [Text Analytics of Tweet Emotion - EDA with Plotly](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FText%20Analytics%20of%20Tweet%20Emotion%20-%20EDA%20with%20Plotly.ipynb)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fbit.ly\u002F3Nk0zRA) [Sentiment analysis using TextBlob and Vader](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fsentiment_analysis_textblob_Vader.ipynb)\n","# 大型语言模型微调项目及常见实用技巧笔记\n\n# [在Twitter上找到我](https:\u002F\u002Ftwitter.com\u002Frohanpaul_ai)\n\n## [📚 我每天为超过11.2万名读者撰写关于可落地AI发展的文章。订阅即获一本1300多页的Python书籍（免费）↓↓)](https:\u002F\u002Fwww.rohan-paul.com\u002Fs\u002Fdaily-ai-newsletter\u002Farchive?sort=new)\n\n[logo]: https:\u002F\u002Fgithub.com\u002Frohan-paul\u002Frohan-paul\u002Fblob\u002Fmaster\u002Fassets\u002Fnewsletter_rohan.png\n\n[![Rohan的新闻通讯][logo]](https:\u002F\u002Fwww.rohan-paul.com\u002F) &nbsp;\n\n\n\n### 微调大型语言模型（附YouTube视频讲解）\n\n| 笔记本 | 🟠 **YouTube视频**|\n| -------- | ---------------------- |\n| [使用unsloth对Llama-3-8B进行4位量化并采用ORPO微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLlama_3_Finetuning_ORPO_with_Unsloth.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6ikUpJcDrPs&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=31) |\n| [使用unsloth在自定义数据集上对Llama-3进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLlama-3_Finetuning_on_custom_dataset_with_unsloth.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AmVEGPS9JIg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=25) |\n| [CodeLLaMA-34B——对话代理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FCodeLLaMA_34B_Conversation_with_Streamlit.py) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=815NpXvniIg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=16&ab_channel=Rohan-Paul-AI) |\n| [使用KV缓存对Yarn-Llama-2-13b-128k进行推理，以解答超长教材中的测验](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FInference_Yarn-Llama-2-13b-128k_Github.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RYTOQERqVsg&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=14&ab_channel=Rohan-Paul-AI)|\n| [使用PEFT和QLORA对Mistral 7B进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_FineTuning_with_PEFT_and_QLORA.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6DGYj1EEWOw&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=13&ab_channel=Rohan-Paul-AI)|\n| [使用PEFT和QLORA对Falcon 7B进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFalcon-7B_FineTuning_with_PEFT_and_QLORA.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fEzuBFi35J4&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=11&ab_channel=Rohan-Paul-AI)|\n| [使用PEFT和QLORA对Phi 1_5进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFineTuning_phi-1_5_with_PRFT_LoRA.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=J0RbOtLrJhQ&list=PLxqBkZuBynVTzqUQCQFgetR97y1X_1uCI&index=10&ab_channel=Rohan-Paul-AI)|\n| [利用大型语言模型进行网页抓取（AnthropicAI + LangChainAI）](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FWeb%20scraping%20with%20Large%20Language%20Models%20(LLM)-AnthropicAI%20%2B%20LangChainAI.ipynb) | [![Youtube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QAY82UvrsHg&list=PLxqBkZuBynVTiTEvP6-GYf35yA6OqIN7Y&index=2&ab_channel=Rohan-Paul-AI)|\n\n\n---------------------------\n\n### 大语言模型微调\n\n| 笔记本 | Colab |\n| -------- | ------------- |\n| 📌 [Gemma_2b_finetuning_ORPO_full_precision](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Fgemma-2b_ORPO_FineTuning_full_precision\u002Fv2_Colab_Gemma_2b_orpo.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Fgemma-2b_ORPO_FineTuning_full_precision\u002FGemma_2b_orpo_full_precision_Colab.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [Jamba_Finetuning_Colab-Pro](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FJamba_Finetuning_Colab-Pro.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [使用 QLoRA 微调 codellama-34B](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_codellama-34B-with-QLoRA.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_codellama-34B-with-QLoRA.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [基于 Gradio 的 Mixtral 聊天机器人](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMixtral_Chatbot_with_Gradio)|\n| 📌 [使用 togetherai API 运行 Mixtral](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftogetherai-api-with_Mixtral.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftogetherai-api-with_Mixtral.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [将 TogetherAI 与 LangChain 🦙 集成](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTogetherAI_API_with_LangChain.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTogetherAI_API_with_LangChain.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [Mistral-7B-Instruct_GPTQ - 在 finance-alpaca 数据集上微调 🦙](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral-7B-Instruct_GPTQ-finetune.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7B_Instruct_GPTQ_finetune.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [使用 DPO 直接偏好优化对 Mistral 7b 进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7b_FineTuning_with_DPO_Direct_Preference_Optimization.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FMistral_7b_FineTuning_with_DPO_Direct_Preference_Optimization.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [微调 llama_2_GPTQ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FFinetune_llama_2_GPTQ)\n| 📌 [使用 Unsloth 和 RoPE 缩放对 TinyLlama 进行微调，数据集为 dolly-15](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTinyLlama_with_Unsloth_and_RoPE_Scaling_dolly-15k.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FTinyLlama_with_Unsloth_and_RoPE_Scaling_dolly-15k.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n| 📌 [使用 Taylor_Swift 歌词对 Tinyllama 进行微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb)|\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002Ftinyllama_fine-tuning_Taylor_Swift.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"在 Colab 中打开\"\u002F>\u003C\u002Fa>\n\n### 大型语言模型技术与工具详解\n\n| LLM 概念 |\n| -------- |\n| 📌 [DPO（直接偏好优化）训练及其数据集](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FDPOTrainer.ipynb)|\n| 📌 [使用 GPTQ 实现 4 位量化的大规模语言模型](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002F4-bit_LLM_Quantization_with_GPTQ.ipynb)|\n| 📌 [使用 HF Transformers 进行量化](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FQuantize_with_HF_transformers)|\n| 📌 [理解 LoRA 中的秩 r 及相关矩阵运算](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FUnderstanding_rank_r_in_LoRA_and_related_Matrix_Math.ipynb)|\n| 📌 [旋转嵌入（RopE）是 LlaMA-2 实现的基本构建模块之一](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FRoPE-As-Implemented-in-LlaMa-Source-Code.ipynb)|\n| 📌 [HuggingFace 中的聊天模板](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002Fapply_chat_template.ipynb)|\n| 📌 [Mixtral 8x7B 如何成为一款拥有 470 亿参数的稠密模型](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FMOE-Mixture-of-Experts\u002FMixtral_8x7B_MoE_Why_47Bn_param_by_Shared_Param.md)|\n| 📌 [LLM 训练中的“验证日志困惑度”概念——基础要点说明](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FValidation_log_perplexity.md)|\n| 📌 [为何需要为 LoRA\u002FQLoRA 确定“目标层”](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002Ftargets_layers_in_peft_and_meaning_of_Rank_of_a_Matrix.ipynb)|\n| 📌 [评估每秒 Token 数](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FEvaluate_token_per_sec.ipynb)|\n| 📌 [遍历 PyTorch 模块的嵌套属性（或子模块）](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FTraverse_through_sub-modules_of_PyTorch_Model.ipynb)|\n| 📌 [在 Mistral 官方仓库中实现 PyTorch 中的稀疏混合专家层](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FMoE_implementation_Mistral_official_Repo.ipynb)|\n| 📌 [从 Transformer 模型的最后隐藏状态中提取特定 Token 表示的实用方法](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FSelect_last_meaningful_token_from_each_sequence.ipynb)|\n| 📌 [将 PyTorch 模型的参数和张量转换为半精度浮点格式](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FConvert_Pytorch_model_to_half_precision.ipynb)|\n| 📌 [使用 GPTQ 方法对 🤗 Transformers 模型进行量化](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FQuantizing_Transformers_with_GPTQ.ipynb)|\n| 📌 [对 Mixtral-8x7B 进行量化，使其能在 24GB GPU 上运行](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FQuantize_mixtral-instruct-awq_in_so_it_can_run_in_24GB.ipynb)|\n| 📌 [大型语言模型领域中的 GGML 或 GGUF 是什么？](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLLM_Techniques_and_utils\u002FGGUF_GGML_GPTQ-basics.md)|\n\n\n\n\n\n---------------------------\n\n## 其他小型语言模型\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=-rqmj_tfQLo&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=34&ab_channel=Rohan-Paul-AI) [DeBERTa 在 Amazon 评论数据集上的 PyTorch 微调](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDeBERTa%20Fine%20Tuning-for%20Amazon%20Review%20Dataset%20Pytorch.ipynb)\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=4nNbg4bWDrQ&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=32&ab_channel=Rohan-Paul-AI) [针对自定义数据集的多分类任务微调 BERT](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFineTuning_BERT_for_Multi_Class_Classification_Turkish)\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=91msLyGC-LI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=28&ab_channel=Rohan-Paul-AI) [在 NLP 项目中使用 HuggingFace Transformer 进行分词时的文档 STRIDE](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=91msLyGC-LI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=28&ab_channel=Rohan-Paul-AI)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=cplo2UyNw24&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=31&ab_channel=Rohan-Paul-AI) [预训练 Transformer 模型的微调——权重（参数）究竟发生了什么？]\n\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=pqpaHeCsuVI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=30&ab_channel=Rohan-Paul-AI) [Cerebras-GPT 新型大型语言模型以 Apache 2.0 许可证开源](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=pqpaHeCsuVI&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=30&ab_channel=Rohan-Paul-AI)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=6X0xfXMKCjM&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=29&ab_channel=Rohan-Paul-AI) [Roberta-Large 在 Kaggle NLP 竞赛中使用 PyTorch 进行命名实体识别](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FRoberta-Large-NER-on-Kaggle-NLP%20Competition)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=EHtHF9Kvm0Y&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=28&ab_channel=Rohan-Paul-AI) [Longformer 在 Kaggle NLP 竞赛中的端到端应用](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FLongformer%20end%20to%20end%20with%20Kaggle%20NLP%20competition)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=tvdIF1FU7fg&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=24) [使用 PyTorch Lightning 实现零样本多语言情感分类](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fzero_shot_multilingual_sentiment_classification_with_USEm)\n\n- [![Youtube Link][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CwLPglxw1WA&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=23) [微调 Transformer（BERT）用于客户评论预测 | NLP | HuggingFace ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_HuggingFace_Transformer_BERT_Yelp_Customer_Review_Predictions)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=30zPz5Xz-8g&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=21) [理解BERT嵌入与分词 | 自然语言处理 | HuggingFace](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FUndersting_BERT_Embedding_Vector)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fl0ow-nD8FM&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=20) [使用BERTopic进行主题建模 | arxiv摘要数据集](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic-modeling-with-bertopic-arxiv-abstract)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=vrDdnQfav0s&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=21) [用于主题建模的潜在狄利克雷分配（LDA）](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic_Modeling_with_LDA.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iCL1TmRQ0sk&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=19) [向HuggingFace预训练模型添加自定义任务专用层](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FAdd-task_specific_custom_layer_to_model.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ZvsH09XGuZ0&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=18) [针对多分类文本分类微调DistilBERT](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FMulti-class-text-classifica_fine-tuning-distilbert.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=dzyDHMycx_c&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=18) [针对命名实体识别（NER）微调BERT](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FYT_Fine_tuning_BERT_NER_v1.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fLqiPks4neU&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=15) [通过微调Transformer模型进行文本摘要 | 自然语言处理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_Pegasus_for_Text_Summarization.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HDSNjrxSwqw&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=14) [使用Transformer进行文本摘要——BART + T5 + Pegasus\n  ](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FText_Summarization_%20BART%20_T5_Pegasus.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=oxEXBJQG27A&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=13) [Debarta-v3-large模型针对Kaggle竞赛反馈奖的微调 | 自然语言处理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDeberta-v3-large-For_Kaggle_Competition_Feedback-Prize\u002Fdeberta-v3-large-For_Kaggle_Competition_Feedback-Prize.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SmWbKiueYVU&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=12) [使用BERT和自动聚类标签进行主题建模](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTopic_Modeling_with_BERT_and_Automatic_cluster_labeling\u002FTopic_Modeling.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Ua_ToM-CG5Q&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=11) [在使用GPT-2生成文本时的解码策略](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FDecoding_Strategies_for_text_generation\u002FDecoding_Strategies_for_text_generation.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=VrJwKdls6d4&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=12) [使用LSTM和TensorFlow进行假新闻分类](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFake_News_Classification_with_LSTM_Tensorflow.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=hgg2GAgDLzA&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=11) [FinBERT针对超长文本（超过512个标记）的情感分析 | 第二部分](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFinBERT_Long_Text_Part_2.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WEAAs_0etJQ&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=9) [FinBERT针对超长文本语料库（超过512个标记）的情感分析 | 第一部分](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFinBERT_Long_Text_Part_2.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fwDTLQDKJTE&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=8) [使用HuggingFace的Transformer计算句子间的余弦相似度](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FCosine_Similarity_between_sentences_with_Transformers.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=urMUa4Nw_B8&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=7) [零样本学习——使用XLM-Roberta进行跨语言命名实体识别](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FZero_Shot_Learning_multilingual-NER.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Hp8_Enwzdxk&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=6) [来自HuggingFace的BERT——少量基准应用 | 自然语言处理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FBERT_HuggingFace_Basic_Usages.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CHFiTTPeyUw&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=9) [从零开始构建带缩放点积的Transformer编码器](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FTransformer_From_Scratch\u002FTransformer_From_Scratch.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=_IGdekeBCoE&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=7) [自然语言处理中的模糊字符串匹配 | 自然语言处理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFuzzy-String-Matching.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=SzSANHjYhfg&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=6) [使用Spacy理解词向量的用法——词与句子的相似性](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FWord-Vectors-Understanding-with-Spacy.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=TxTxWAohW7E&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=5) [使用spaCy进行命名实体识别NER——提取主语、谓语、宾语](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FNamed_Entity_Recognition_NER_using_spaCy%20-%20Extracting_Subject_Verb_Action.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zcW2HouIIQg&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=5) [微调DistilBert——基于Hugging Face Transformer的诗歌情感预测 | 自然语言处理](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FOther-Language_Models_BERT_related\u002FFine_Tuning_DistilBert_Poem_Sentiments.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=0Y03waAL4Gw&list=PLxqBkZuBynVTn2lkHNAcw6lgm1MD5QiMK&index=4) [在Kaggle仇恨言论数据集上微调基于BERT的未大写Hugging Face模型](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fbert-base-uncased-fine-tuned-kaggle-hate-speech-dataset.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=DpzQNQI-S3s&list=PLxqBkZuBynVQEvXfJpq3smfuKq3AiNW-N&index=3) [推文情感的文本分析——使用Plotly进行探索性数据分析](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002FText%20Analytics%20of%20Tweet%20Emotion%20-%20EDA%20with%20Plotly.ipynb)\n\n- [![YouTube链接][logo]](https:\u002F\u002Fbit.ly\u002F3Nk0zRA) [使用TextBlob和Vader进行情感分析](https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Ftree\u002Fmain\u002FOther-Language_Models_BERT_related\u002Fsentiment_analysis_textblob_Vader.ipynb)","# LLM-FineTuning-Large-Language-Models 快速上手指南\n\n## 环境准备\n- **系统**：Linux \u002F macOS \u002F Windows WSL2  \n- **GPU**：≥8 GB 显存（Colab T4 即可跑通示例）  \n- **Python**：3.9+  \n- **CUDA**：11.8+（本地 GPU 训练时）  \n\n## 安装步骤\n1. 克隆仓库  \n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models.git\n   cd LLM-FineTuning-Large-Language-Models\n   ```\n\n2. 创建并激活虚拟环境  \n   ```bash\n   python -m venv llm-ft\n   source llm-ft\u002Fbin\u002Factivate   # Windows: llm-ft\\Scripts\\activate\n   ```\n\n3. 一键安装依赖（已内置清华镜像）  \n   ```bash\n   pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n   ```\n\n   若需 GPU 加速，额外安装：  \n   ```bash\n   pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n   ```\n\n4. 登录 Hugging Face（获取私有\u002F gated 模型）  \n   ```bash\n   huggingface-cli login\n   ```\n\n## 基本使用\n以「Llama-3-8B 4bit ORPO 微调」为例，最快 3 步跑通：\n\n1. 打开 Colab（免本地 GPU）  \n   [![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fblob\u002Fmain\u002FLlama_3_Finetuning_ORPO_with_Unsloth.ipynb)\n\n2. 运行前两段 Cell  \n   - 自动下载 Llama-3-8B-Instruct 4bit 量化权重  \n   - 加载自定义数据集（示例为 `mlabonne\u002Forpo-dpo-mix-40k`）\n\n3. 启动训练  \n   ```python\n   trainer.train()\n   ```\n   单卡 A100 约 30 分钟完成，模型自动保存至 `.\u002Foutput-orpo`。\n\n4. 本地推理  \n   ```python\n   from transformers import AutoTokenizer, AutoModelForCausalLM\n   tok = AutoTokenizer.from_pretrained(\".\u002Foutput-orpo\")\n   model = AutoModelForCausalLM.from_pretrained(\".\u002Foutput-orpo\", load_in_4bit=True)\n   print(tok.decode(model.generate(tok(\"你好，Llama-3！\", return_tensors=\"pt\").input_ids, max_new_tokens=64)[0]))\n   ```\n\n> 其余 Notebook 用法相同：点击 Colab 徽章 → 运行全部 Cell → 按需替换数据集与超参。","一家 20 人的跨境电商 SaaS 初创公司，需要在 2 周内把通用大模型改造成能自动撰写符合亚马逊 A9 算法、带关键词埋点的商品 Listing，并支持英\u002F德\u002F法三语。\n\n### 没有 LLM-FineTuning-Large-Language-Models 时\n- 工程师先用 GPT-4 写 Prompt，结果输出格式飘忽，关键词密度忽高忽低，运营同事手动改稿，平均 1 条 Listing 耗时 45 分钟  \n- 为了让模型懂品类术语，团队爬了 5 万条竞品标题当 Few-Shot 示例，Prompt 长度爆炸，单次调用成本 0.08 美元，日调用 3000 次直接烧掉 240 美元  \n- 多语言靠 Google Translate 后校对，法语变体词错误频出，德国站因标题违规被下架 17 次，客服疲于申诉  \n- 想把 Llama-3-8B 拉到本地省钱，但 80G A100 租金一天 200 美元，QLoRA 脚本自己写，调了 3 天连依赖都跑不通，项目延期风险飙升  \n\n### 使用 LLM-FineTuning-Large-Language-Models 后\n- 直接跑通 `Llama-3_Finetuning_on_custom_dataset_with_unsloth.ipynb`，用 4-bit 量化 + ORPO，在 1×RTX 4090 上 3 小时完成微调，输出格式固定为「标题-卖点-关键词」三段式，单条生成 3 秒，人工只需点审  \n- 训练数据仅用 2000 条自家高转化 Listing，LoRA 参数 16M，显存占用 11G，推理成本降到 0.0003 美元\u002F次，日省 230 美元  \n- 用同一套脚本换德语、法语语料再跑两轮，模型自动掌握当地搜索习惯，德国站 2 周零下架，法语转化率提升 18%  \n- 整个流程从环境搭建到上线只花 1.5 天，工程师把省下的时间去做库存预测模型，项目如期上线  \n\nLLM-FineTuning-Large-Language-Models 让初创团队在单张消费级显卡上就能低成本、快速地把通用大模型变成懂业务、懂多语种的“运营专家”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Frohan-paul_LLM-FineTuning-Large-Language-Models_91129fb9.png","rohan-paul","Rohan Paul","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Frohan-paul_dd6d9903.jpg","Machine Learning Engineer | Founder and writer of daily AI newsletter\r\nrohan-paul.com","https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Frohan-paul-ai","Bangalore",null,"rohanpaul_ai","https:\u002F\u002Frohan-paul.com\u002F","https:\u002F\u002Fgithub.com\u002Frohan-paul",[86,90],{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",98.7,{"name":91,"color":92,"percentage":93},"Python","#3572A5",1.3,566,138,"2026-03-26T11:43:18","Linux, macOS, Windows","需要 NVIDIA GPU；显存 ≥24 GB（34B 模型 QLoRA 量化后 24 GB 可跑，8B 模型 8 GB 可跑），CUDA 11.7+","最低 16 GB，推荐 32 GB+",{"notes":101,"python":102,"dependencies":103},"所有 Notebook 均可一键在 Google Colab 运行；本地需先安装 CUDA 驱动与 PyTorch GPU 版；首次运行会自动下载模型权重，体积 5 GB–70 GB 不等，建议预留 100 GB 以上磁盘空间","3.8+",[104,105,106,107,108,109,110,111,112,113],"torch>=2.0","transformers>=4.30","accelerate","peft","bitsandbytes","unsloth","trl","datasets","gradio","langchain",[13,26],[116,117,118,119,120,121,122,123,124,125,126,127],"gpt-3","gpt3-turbo","large-language-models","llama2","llm","llm-finetuning","llm-inference","llm-serving","llm-training","mistral-7b","open-source-llm","pytorch","2026-03-27T02:49:30.150509","2026-04-06T08:18:26.917415",[131],{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},6015,"在 Google Colab 中使用 TFAutoModelForSequenceClassification 时出现 ValueError: Could not interpret optimizer identifier 怎么办？","该错误通常是由于 TensorFlow 与 Transformers 版本不兼容导致。解决方案：\n1. 先卸载旧版本：\n   ```python\n   !pip uninstall -y tensorflow keras\n   ```\n2. 安装兼容版本：\n   ```python\n   !pip install tensorflow==2.11.0 transformers==4.21.0\n   ```\n3. 重启 Colab 运行时后，使用以下方式编译模型：\n   ```python\n   from transformers import TFAutoModelForSequenceClassification\n   import tensorflow as tf\n   \n   model = TFAutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=3)\n   optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=5e-5)  # 注意使用 legacy 版本\n   model.compile(\n       optimizer=optimizer,\n       loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n       metrics=[tf.metrics.SparseCategoricalAccuracy()],\n   )\n   ```","https:\u002F\u002Fgithub.com\u002Frohan-paul\u002FLLM-FineTuning-Large-Language-Models\u002Fissues\u002F4",[]]