[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-PKU-Alignment--align-anything":3,"tool-PKU-Alignment--align-anything":64},[4,17,27,35,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85013,"2026-04-06T11:09:19",[26,43,44,45,14,46,15,13,47],"数据工具","视频","插件","其他","音频",{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":23,"last_commit_at":54,"category_tags":55,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 
通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[14,26,13,15,46],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74963,"2026-04-06T11:16:39",[15,26,13,46],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":78,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":23,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":112,"github_topics":113,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":120,"updated_at":121,"faqs":122,"releases":152},4378,"PKU-Alignment\u002Falign-anything","align-anything","Align Anything: Training All-modality Model with Feedback","align-anything 是由北京大学对齐团队（PKU-Alignment）打造的开源框架，旨在帮助各类多模态大模型（涵盖文本、图像、音频、视频等）更好地遵循人类意图与价值观。它主要解决了当前多模态模型在训练过程中难以统一适配不同对齐算法、缺乏灵活微调方案以及评估标准不一的痛点。\n\n无论是希望快速复现前沿算法的研究人员，还是需要定制开发多模态应用的工程师，都能通过 align-anything 高效开展工作。其核心优势在于高度模块化的架构设计，让用户能轻松修改代码以适配特定任务。工具内置了 SFT、DPO、PPO 等多种主流对齐算法，并创新性地支持类 O1 推理训练及基于规则的强化学习。此外，它还集成了 InterMT 项目，提供了首个包含人类反馈的多轮交错偏好数据集，以及专用的 eval-anything 评估框架，方便开发者对模型进行全方位评测。配合简洁的多模态命令行接口，align-anything 让复杂的大模型对齐工作变得更加直观和易用。","\u003C!-- markdownlint-disable first-line-h1 -->\r\n\u003C!-- markdownlint-disable html -->\r\n\r\n\u003Cdiv align=\"center\">\r\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_c1de7cb29482.jpg\" width=\"390\"\u002F>\r\n  \u003Cdiv>&nbsp;\u003C\u002Fdiv>\r\n  \u003Cdiv align=\"center\">\r\n    \u003Cb>\u003Cfont size=\"5\">project website\u003C\u002Ffont>\u003C\u002Fb>\r\n    \u003Csup>\r\n      \u003Ca href=\"https:\u002F\u002Fspace.bilibili.com\u002F3493095748405551?spm_id_from=333.337.search-card.all.click\">\r\n        \u003Ci>\u003Cfont size=\"4\">HOT\u003C\u002Ffont>\u003C\u002Fi>\r\n      \u003C\u002Fa>\r\n    \u003C\u002Fsup>\r\n    &nbsp;&nbsp;&nbsp;&nbsp;\r\n    \u003Cb>\u003Cfont size=\"5\">PKU-Alignment Team\u003C\u002Ffont>\u003C\u002Fb>\r\n    \u003Csup>\r\n      \u003Ca href=\"https:\u002F\u002Fspace.bilibili.com\u002F3493095748405551?spm_id_from=333.337.search-card.all.click\">\r\n        \u003Ci>\u003Cfont size=\"4\">welcome\u003C\u002Ffont>\u003C\u002Fi>\r\n      \u003C\u002Fa>\r\n    \u003C\u002Fsup>\r\n  \u003C\u002Fdiv>\r\n  
\u003Cdiv>&nbsp;\u003C\u002Fdiv>\r\n\r\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Falign-anything?logo=pypi)](https:\u002F\u002Fpypi.org\u002Fproject\u002Falign-anything)\r\n[![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FPKU-Alignment\u002Falign-anything?label=license)](#license)\r\n\r\n[📘Documentation](https:\u002F\u002Falign-anything.readthedocs.io\u002F) |\r\n[🛠️Quick Start](#quick-start) |\r\n[🚀Algorithms](#algorithms) |\r\n[👀Evaluation](.\u002Fprojects\u002Feval-anything) |\r\n[🤔Reporting Issues](#report-issues)\r\n\r\n\u003C\u002Fdiv>\r\n\r\n\u003Cdiv align=\"center\">\r\n\r\n[Our All-Modality Alignment Datasets](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002Falign-anything)\r\n\r\n\u003C\u002Fdiv>\r\n\r\nAlign-Anything aims to align any modality large models (any-to-any models) with human intentions and values. \r\n\r\n- **Highly Modular Framework** allowing users to easily modify and customize the code for different tasks (see [framework design](https:\u002F\u002Falign-anything.readthedocs.io\u002F)).\r\n- **Various Modality Model Fine-Tuning** for diverse multi-modal (image\u002Fvideo\u002Faudio) models (see [scripts](.\u002Fscripts)).\r\n- **Different Alignment Methods.** Different alignment algorithms, including SFT, DPO, PPO, and others.\r\n- **Multi-Modal CLI.** Multi-modal CLI for image, audio, and video modalities (see [multi-modal CLI](#multi-modal-cli)).\r\n- **O1-like Training.** O1-like training based on [DollyTails](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FDollyTails-12K) (see [scripts\u002Fllama_sft_o1.sh](.\u002Fscripts)).\r\n- **Rule-based RL.** Rule-based RL encouraged by [Deepseek-R1](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-R1).\r\n\r\n**Note:** We provide a [quick start guide](https:\u002F\u002Falign-anything.readthedocs.io\u002F) for users to quickly get the code structure and development details.\r\n\r\n## 📣 News\r\n\r\n### Roadmap\r\n\r\nWe are actively working on the following features:\r\n\r\n- ⚡️ **More Models:** Integrating cutting-edge models like the Qwen3-VL series.\r\n\r\n- 🚀 **More Inference Engines:** Adding support for high-performance engines like SGLang.\r\n\r\n- 🤖 **Advanced VLA Algorithms:** Implementing more VLA algorithms, including Safe-VLA.\r\n\r\n- 🧠 **Agent RL:** Expanding capabilities to support Agent-based Reinforcement Learning.\r\n\r\n- 🛠️ **Enhanced RLHF Features:** Upgrading our RL training framework with features like asynchronous rollout, vLLM sleep mode, and checkpoint-engine.\r\n\r\nStay tuned for more updates!\r\n  \r\n- **[2025.11.11]** 🎉🎉🎉 We now support the alignment fine-tuning of Qwen3 and Qwen3-MoE models!\r\n\r\n- **[2025.11.11]** 🎉🎉🎉 We integrate the **InterMT** project (NeurIPS 2025 Spotlight) into the main repository, featuring the first multi-turn interleaved preference alignment dataset with human feedback and InterMT-Bench for evaluating multi-turn multimodal interaction capabilities. Check out [InterMT](.\u002Fprojects\u002FInterMT) for more details.\r\n\r\n- **[2025.11.11]** 🛠️🛠️🛠️ We integrate the **eval-anything** evaluation framework into the main repository as a dedicated project for large-scale evaluation of any-to-any models. Check out [eval-anything](.\u002Fprojects\u002Feval-anything) for more details.\r\n\r\n- **[2025.04.14]** 📜📜📜 We release the tutorial on SFT training for `text-image-to-text` models. 
Check out the [cookbook_en](.\u002Fcookbooks\u002Fen\u002Ftext_image_to_text_sft.ipynb) (for English) and [cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Ftext_image_to_text_sft.ipynb) (for Chinese).\r\n\r\n- **[2025.04.07]** 🥳🥳🥳 Align-Anything now serves as the homework platform for the PKU course [Large Language Models Basics and Alignment](https:\u002F\u002Fpku-llm.ai\u002F), supporting on both Nvidia GPU and Huawei Ascend NPU. The corresponding tutorial will be released soon!\r\n\r\n> Align-Anything目前已成为北京大学本硕博课程《大模型基础与对齐》的课程作业平台，支持在Nvidia GPU和华为昇腾NPU上进行训练与评估。对应教程将持续发布！\r\n\r\n- **[2025.03.31]** ✅✅✅ We enhance the installation process for both Nvidia GPU and Huawei Ascend NPU. Please refer to the [Quick Start](#quick-start) for details.\r\n\r\n- **[2025.03.31]** 🚀🚀🚀 We support wrapping the `actor` model with [vLLM engine](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm) for sequence generation in `text-to-text ppo` training. It greatly accelerates the ppo training process. Our results show that with vLLM engine, it only takes 22 minutes to finish ppo, while the baseline case needs ~150 minutes.\r\n\r\n    > 😊 Our implementation is encouraged by [OpenRLHF](https:\u002F\u002Fgithub.com\u002FOpenRLHF\u002FOpenRLHF), which is a great project for RLHF training.\r\n\r\n- **[2025.03.27]** 📜📜📜 We release the tutorial on DPO training for `text-to-text` models. Check out the [cookbook_en](.\u002Fcookbooks\u002Fen\u002Ftext_to_text_dpo.ipynb) (for English) and [cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Ftext_to_text_dpo.ipynb) (for Chinese).\r\n\r\n- **[2025.03.15]** 📜📜📜 We release the tutorial for extending modality from `text-to-text` to `text-image-to-text` models. Check out the [cookbook_en](.\u002Fcookbooks\u002Fen\u002Fmodality_scaling.ipynb) (for English) and [cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Fmodality_scaling.ipynb) (for Chinese).\r\n\r\n  > We will release other tutorials in the future. Stay tuned! 😊\r\n\r\n- **[2025.03.15]** We have supported seamless migration to Slurm clusters! Check out our example [here](#training-on-slurm) to get started.\r\n\r\n- **[2025.03.14]** 🛠️🛠️🛠️ We have supported Safe RLHF-V for `Text + Image -> Text` modality models.\r\n\r\n- **[2025.03.12]** 🛠️🛠️🛠️ We have supported resume training for DPO and SFT, see [here](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fpull\u002F153).\r\n\r\n- **[2025.03.11]** 🎉🎉🎉 We support the installation of **Huawei Ascend** dependencies through pre-set Docker image.\r\n\r\n- **[2025.03.02]** 🎉🎉🎉 We have implemented alignment training for Vision-Language-Action Models in embodied intelligence, see [VLA Trainer](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Falign_anything\u002Ftrainers\u002Ftext_video_to_action), with more features coming soon!\r\n\r\n- **[2025.02.28]** 🤝🤝🤝 We supported the training and inference of align-anything on Huawei Ascend NPU.\r\n\r\n  > 近期 align-anything 团队正在和华为昇腾团队积极联合开发，基于 VLLMs-Ascend 上的全模态推理和对齐微调。\r\n\r\n\r\n\u003Cdetails>\u003Csummary>More News\u003C\u002Fsummary>\r\n\r\n- **[2025.02.28]** 🤗🤗🤗 We open-sourced [🤗Align-DS-V](https:\u002F\u002Fhuggingface.co\u002FPKU-Alignment\u002FAlign-DS-V), an experimental vision-language model based on [DeepSeek-R1-Distill-Llama-8B](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1), which enhances reasoning by incorporating additional modalities into the language model. 
The model has already surpassed **18,000+** downloads!\r\n- **[2025.02.28]** We supported the alignment fine-tuning of DeepSeek’s Unified Multimodal Understanding and Generation Models, as well as the SFT and DPO of the [**Janus-Series**](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FJanus). You can find the examples in the `.\u002Fscripts` and `.\u002Fprojects\u002Fjanus` directories.\r\n- **[2025.02.19]** We supported the alignment methods **GRPO** used in DeepSeek R1. See [GRPO Trainer](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fblob\u002Fmain\u002Falign_anything\u002Ftrainers\u002Ftext_to_text\u002Fgrpo.py).\r\n- **[2025.01.21]** We supported the alignment fine-tuning of **MiniCPM-o** (audio & image), also included in [the official repository’s README recommendations](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FMiniCPM-o#with-align-anything-).\r\n- **[2025.01.17]** 🔥🔥🔥 We supported the fine-tuning of **O1-like reasoning in the text2text modality** (see [DollyTails](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FDollyTails-12K)), with multimodal and additional modalities coming soon!\r\n- **[2024.10.11]** We supported the alignment fine-tuning of the latest **Emu3** model.\r\n- **[2024.08.29]** 💡💡💡 We supported learning from language feedback (different from binary feedback). For more details, see [lang-feedback](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Fprojects\u002Flang_feedback).\r\n- **[2024.10.10]** We support SFT for `Any -> Any` modality models Emu3.\r\n- **[2024.09.24]** We support SFT, DPO, RM and PPO for `Text + Video -> Text` modality models.\r\n- **[2024.09.13]** We support SFT, DPO, RM and PPO for `Text + Audio -> Text` modality models.\r\n- **[2024.08.17]** We support DPO and PPO for `Text+Image -> Text+Image` modality models.\r\n- **[2024.08.15]** We support a new function in the evaluation module: the `models_pk` script in [here](.\u002Fscripts\u002Fmodels_pk.sh), which enables comparing the performance of two models across different benchmarks.\r\n- **[2024.08.06]** We restructure the framework to support any modality evaluation and the supported benchmark list is [here](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Falign_anything\u002Fevaluation\u002Fbenchmarks).\r\n- **[2024.08.06]** We support `Text+Image -> Text+Image` modality for the SFT trainer and Chameleon models.\r\n- **[2024.07.23]** We support `Text -> Image`, `Text -> Audio`, and `Text -> Video` modalities for the SFT trainer and DPO trainer.\r\n- **[2024.07.22]** We support the **Chameleon** model for the SFT trainer and DPO trainer!\r\n- **[2024.07.17]** We open-source the Align-Anything-Instruction-100K dataset for text modality. 
This dataset is available in both [English](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FAlign-Anything-Instruction-100K) and [Chinese](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FAlign-Anything-Instruction-100K-zh) versions, each sourced from different datasets and meticulously refined for quality by GPT-4.\r\n- **[2024.07.14]** We open-sourced the align-anything framework.\r\n\r\n\u003C\u002Fdetails>\r\n\r\n## Quick Start\r\n\r\n### Easy Installation\r\n\r\n```bash\r\n# clone the repository\r\ngit clone git@github.com:PKU-Alignment\u002Falign-anything.git\r\ncd align-anything\r\n\r\n# create virtual env\r\nconda create -n align-anything python==3.11\r\nconda activate align-anything\r\n```\r\n\r\n#### On Nvidia GPU\r\n\r\n- **`[Optional]`** We recommend installing [CUDA](https:\u002F\u002Fanaconda.org\u002Fnvidia\u002Fcuda) in the conda environment and setting the environment variable.\r\n\r\n```bash\r\n# We tested on the H800 computing cluster, and this version of CUDA works well.\r\n# You can adjust this version according to the actual situation of your computing cluster.\r\n\r\nconda install nvidia\u002Flabel\u002Fcuda-12.2.0::cuda\r\nexport CUDA_HOME=$CONDA_PREFIX\r\n```\r\n\r\n> If your CUDA is installed in a different location, such as `\u002Fusr\u002Flocal\u002Fcuda\u002Fbin\u002Fnvcc`, you can set the environment variable as follows:\r\n\r\n```bash\r\nexport CUDA_HOME=\"\u002Fusr\u002Flocal\u002Fcuda\"\r\n```\r\n\r\nFinally, install `align-anything` by running:\r\n\r\n```bash\r\npip3 install -e .\r\n\r\npip3 install vllm==0.7.2 # to run PPO on the vLLM engine\r\n```\r\n\r\n#### On Huawei Ascend NPU\r\n\r\nYou can build on a Huawei Ascend NPU by simply running:\r\n\r\n```bash\r\npip3 install -e .[ascend]\r\n```\r\n\r\nThe current test environment for Ascend is:\r\n\r\n- Python 3.10.6\r\n- CANN 8.0.rc3\r\n- Architecture: aarch64\r\n- Hardware: 8x Ascend-SNT9B ARM (192 cores, 1536GB memory)\r\n\r\n\u003Cdetails>\r\n  \u003Csummary>[Optional] Install Ascend dependencies using our docker image\u003C\u002Fsummary>\r\n\r\n1. **Current Ascend Machine Environment Configuration**\r\n   The current environment configuration for the Ascend machine is as follows:\r\n\r\n   ```\r\n   - Python version: 3.10.6\r\n   - CANN version: 8.0.rc3\r\n   - Architecture: aarch64\r\n   - Hardware: 8x Ascend-SNT9B ARM (192 cores, 1536GB memory)\r\n   - Ascend Driver Version: 23.0.7\r\n   - AscendHAL Version: 7.35.19\r\n   - AICPU Version: 1.0\r\n   - TDT Version: 1.0\r\n   - Log Version: 1.0\r\n   - Profiler Version: 2.0\r\n   - DVPP Kernels Version: 1.1\r\n   - TSFW Version: 1.0\r\n   - Inner Version: V100R001C15SPC012B220\r\n   - Compatible Versions: V100R001C30, V100R001C13, V100R001C15\r\n   - Compatible Firmware Versions: [7.0.0, 7.1.99]\r\n   - Package Version: 23.0.7\r\n   ```\r\n\r\n2. **Create the Docker Container**\r\n   To get started with the pre-configured environment, you can use the `setup_docker.sh` script located in the `.\u002Fscripts` directory to pull the Docker image and create a container with all necessary environments set up:\r\n\r\n   ```bash\r\n   cd scripts\r\n   bash setup_docker.sh\r\n   ```\r\n\r\n   This will automatically pull the Docker image and create a Docker container where all the dependencies and configurations for running the framework are already set up.\r\n\r\n3. **Warning**\r\n   **Environment Compatibility**: The environment mentioned above is tested and verified to work. 
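You can compare your own machine against this configuration before launching a job. A minimal check, assuming the Ascend driver and the `[ascend]` extra are installed (`npu-smi` ships with the driver):\r\n\r\n   ```bash\r\n   # Print NPU driver and firmware status, analogous to nvidia-smi\r\n   npu-smi info\r\n\r\n   # Verify that PyTorch can reach the NPU through the torch_npu adapter\r\n   python3 -c \"import torch, torch_npu; print(torch.npu.is_available())\"\r\n   ```\r\n\r\n   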
If you attempt to run the setup on other environments, you may encounter issues. In such cases, you will need to debug and adjust the setup yourself to ensure compatibility with your specific environment.\r\n\r\n\u003C\u002Fdetails>\r\n\r\n\r\nIf you encounter any issues, please refer to the [FAQ](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fdiscussions\u002F167) for solutions.\r\n\r\n\u003Cdetails>\r\n\u003Csummary>[Optional] Other Dependencies\u003C\u002Fsummary>\r\n\r\n- `pip install -e .[text-to-audio]`: Install the text-to-audio dependencies.\r\n- `pip install -e .[minicpmv]`: Install the minicpmv dependencies.\r\n- `pip install -e .[minicpmo]`: Install the minicpmo dependencies.\r\n\r\n\u003C\u002Fdetails>\r\n\r\n### Training\r\n\r\nWe provide several quick-start scripts; you can find them in the `.\u002Fscripts` directory. These scripts automatically download the model and dataset, then run training or evaluation.\r\n\r\nFor example, `scripts\u002Fllava\u002Fllava_dpo.sh` is the script for the `Text + Image -> Text` modality; you can run it with:\r\n\r\n```bash\r\ncd scripts\r\nbash llava\u002Fllava_dpo.sh\r\n```\r\n\r\n**Note:** The scripts will automatically download the model and dataset from Hugging Face. If you cannot access the internet directly, please try the `HF Mirror`:\r\n\r\n```bash\r\nexport HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com\r\n```\r\n\r\n#### Training on Slurm\r\n\r\n> We fully support seamless migration to Slurm. If you plan to run training on a Slurm-managed cluster, we invite you to use our example Slurm training script:\r\n>\r\n> ```bash\r\n> cd scripts\r\n> bash slurm\u002Fslurm_llava_dpo.sh\r\n> ```\r\n>\r\n> This script is pre-configured with suitable Slurm parameters. You only need to adjust the settings (such as the `job name`, `partition`, `account`, `path`, and `resource allocations`) to match your cluster configuration.\r\n\r\n## Algorithms\r\n\r\nWe support basic alignment algorithms for different modalities, each of which may involve additional algorithms. 
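Each supported combination in the table below corresponds to a trainer module under `align_anything\u002Ftrainers` (the GRPO trainer, for example, lives at `align_anything\u002Ftrainers\u002Ftext_to_text\u002Fgrpo.py`). As a minimal sketch of a direct launch (the module path follows the repository layout, but the flags are illustrative placeholders; prefer the quick-start scripts for the real arguments):\r\n\r\n```bash\r\n# Hypothetical direct invocation of the text-to-text DPO trainer;\r\n# the scripts under .\u002Fscripts wrap launches of this shape.\r\npython3 -m align_anything.trainers.text_to_text.dpo \\\r\n    --model_name_or_path \u003Cyour_model_path> \\\r\n    --output_dir .\u002Foutput\u002Fdpo_example\r\n```\r\n\r\n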
For instance, in the text modality, we have also implemented SimPO, KTO, and others.\r\n\r\n| Modality                           | SFT | RM  | DPO | PPO |\r\n| ---------------------------------- | --- | --- | --- | --- |\r\n| `Text -> Text (t2t)`               | ✔️  | ✔️  | ✔️  | ✔️  |\r\n| `Text+Image -> Text (ti2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\r\n| `Text+Image -> Text+Image (ti2ti)` | ✔️  | ✔️  | ✔️  | ✔️  |\r\n| `Text+Audio -> Text (ta2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\r\n| `Text+Video -> Text (tv2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\r\n| `Text -> Image (t2i)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\r\n| `Text -> Video (t2v)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\r\n| `Text -> Audio (t2a)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\r\n| `Text+Video -> Action (tv2act)`    | ✔️  | ⚒️  | ⚒️  | ⚒️  |\r\n\r\n## New Feature: Align VLA\r\n\r\n|              | \u003Cdetails>\u003Csummary>prompt\u003C\u002Fsummary>navigate to a basketball\u003C\u002Fdetails>                                          | \u003Cdetails>\u003Csummary>prompt\u003C\u002Fsummary>find to a basketball\u003C\u002Fdetails>                                              | \u003Cdetails>\u003Csummary>prompt\u003C\u002Fsummary>locate a vase.\u003C\u002Fdetails>                                                    | \u003Cdetails>\u003Csummary>prompt\u003C\u002Fsummary>find a spray bottle and pick up that spray bottle\u003C\u002Fdetails>                 |\r\n| ------------ | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |\r\n| Baseline     | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_f45c86e484dd.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_a777bda7c336.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_9ad1757f8bcd.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ea541fb165f4.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> |\r\n| **AlignVLA** | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ff06655c3204.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_028555eb6cfc.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ef2cc12f4ad8.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_d7a8452978f9.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  |\r\n\r\n> Alignment fine-tuning can significantly enhance the security performance of the VLA model.\r\n\r\n### Downloading 
the training data\r\n\r\n```bash\r\npython -m align_anything.utils.spoc_utils.download_training_data --save_dir \u002Fpath\u002Fto\u002Fdata  --types fifteen\r\n```\r\n\r\nThen decompress the compressed data package.\r\n\r\n### Training\r\n\r\n\r\nmodify ``HOME_PREFIX`` in ``align-anything\u002Fscripts\u002Fvla\u002Fspoc_sft.sh`` to your local data path.\r\n\r\n\r\n```bash\r\nbash scripts\u002Fvla\u002Fspoc_sft.sh\r\n```\r\n\r\n\r\n## Citation\r\n\r\nPlease cite the repo if you find the data or code in this repo useful 😊\r\n\r\n```bibtex\r\n@inproceedings{ji2024align,\r\n  title={Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback},\r\n  author={Jiaming Ji and Jiayi Zhou and Hantao Lou and Boyuan Chen and Donghai Hong and Xuyao Wang and Wenqi Chen and Kaile Wang and Rui Pan and Jiahao Li and Mohan Wang and Josef Dai and Tianyi Qiu and Hua Xu and Dong Li and Weipeng Chen and Jun Song and Bo Zheng and Yaodong Yang},\r\n  year={2024},\r\n  url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15838}\r\n}\r\n```\r\n\r\n## Report Issues\r\n\r\nIf you have any questions in the process of using align-anything, don't hesitate to ask your questions on [the GitHub issue page](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002Fnew\u002Fchoose), we will reply to you in 2-3 working days.\r\n\r\n# License\r\n\r\nalign-anything is released under Apache License 2.0.\r\n\r\n","\u003C!-- markdownlint-disable first-line-h1 -->\r\n\u003C!-- markdownlint-disable html -->\r\n\r\n\u003Cdiv align=\"center\">\r\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_c1de7cb29482.jpg\" width=\"390\"\u002F>\r\n  \u003Cdiv>&nbsp;\u003C\u002Fdiv>\r\n  \u003Cdiv align=\"center\">\r\n    \u003Cb>\u003Cfont size=\"5\">项目官网\u003C\u002Ffont>\u003C\u002Fb>\r\n    \u003Csup>\r\n      \u003Ca href=\"https:\u002F\u002Fspace.bilibili.com\u002F3493095748405551?spm_id_from=333.337.search-card.all.click\">\r\n        \u003Ci>\u003Cfont size=\"4\">热门\u003C\u002Ffont>\u003C\u002Fi>\r\n      \u003C\u002Fa>\r\n    \u003C\u002Fsup>\r\n    &nbsp;&nbsp;&nbsp;&nbsp;\r\n    \u003Cb>\u003Cfont size=\"5\">北大对齐团队\u003C\u002Ffont>\u003C\u002Fb>\r\n    \u003Csup>\r\n      \u003Ca href=\"https:\u002F\u002Fspace.bilibili.com\u002F3493095748405551?spm_id_from=333.337.search-card.all.click\">\r\n        \u003Ci>\u003Cfont size=\"4\">欢迎\u003C\u002Ffont>\u003C\u002Fi>\r\n      \u003C\u002Fa>\r\n    \u003C\u002Fsup>\r\n  \u003C\u002Fdiv>\r\n  \u003Cdiv>&nbsp;\u003C\u002Fdiv>\r\n\r\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Falign-anything?logo=pypi)](https:\u002F\u002Fpypi.org\u002Fproject\u002Falign-anything)\r\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FPKU-Alignment\u002Falign-anything?label=license)](#license)\r\n\r\n[📘文档](https:\u002F\u002Falign-anything.readthedocs.io\u002F) |\r\n[🛠️快速入门](#quick-start) |\r\n[🚀算法](#algorithms) |\r\n[👀评估](.\u002Fprojects\u002Feval-anything) |\r\n[🤔提交问题](#report-issues)\r\n\r\n\u003C\u002Fdiv>\r\n\r\n\u003Cdiv align=\"center\">\r\n\r\n[我们的全模态对齐数据集](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002Falign-anything)\r\n\r\n\u003C\u002Fdiv>\r\n\r\nAlign-Anything 致力于将任意模态的大模型（任意模态之间的模型）与人类意图和价值观对齐。\r\n\r\n- **高度模块化的框架**，允许用户轻松修改和定制代码以适应不同任务（详见 [框架设计](https:\u002F\u002Falign-anything.readthedocs.io\u002F)）。\r\n- **多种模态模型的微调**，适用于多样化的多模态（图像\u002F视频\u002F音频）模型（详见 [脚本](.\u002Fscripts)）。\r\n- **不同的对齐方法**。包括 SFT、DPO、PPO 等多种对齐算法。\r\n- 
**多模态命令行工具**。支持图像、音频和视频模态的多模态命令行工具（详见 [多模态命令行工具](#multi-modal-cli)）。\r\n- **类似 O1 的训练**。基于 [DollyTails](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FDollyTails-12K) 的类似 O1 的训练（详见 [脚本\u002Fllama_sft_o1.sh](.\u002Fscripts)）。\r\n- **基于规则的强化学习**。受到 [Deepseek-R1](https:\u002F\u002Fhuggingface.co\u002Fdeepseek-ai\u002FDeepSeek-R1) 的启发而采用的基于规则的强化学习。\r\n\r\n**注**：我们提供了[快速入门指南](https:\u002F\u002Falign-anything.readthedocs.io\u002F)，帮助用户迅速了解代码结构及开发细节。\r\n\r\n## 📣 新闻\n\n### 路线图\n\n我们正在积极开发以下功能：\n\n- ⚡️ **更多模型**：集成如通义千问3-VL系列等前沿模型。\n\n- 🚀 **更多推理引擎**：增加对SGLang等高性能引擎的支持。\n\n- 🤖 **先进的VLA算法**：实现更多VLA算法，包括Safe-VLA。\n\n- 🧠 **智能体强化学习**：扩展能力以支持基于智能体的强化学习。\n\n- 🛠️ **增强的RLHF功能**：升级我们的RL训练框架，新增异步rollout、vLLM休眠模式和checkpoint-engine等功能。\n\n敬请期待更多更新！\n\n- **[2025.11.11]** 🎉🎉🎉 我们现已支持通义千问3及通义千问3-MoE模型的对齐微调！\n\n- **[2025.11.11]** 🎉🎉🎉 我们将**InterMT**项目（NeurIPS 2025 Spotlight）整合到主仓库中，该项目包含首个多轮交错式偏好对齐数据集，并附有人类反馈以及用于评估多轮多模态交互能力的InterMT-Bench。更多详情请参阅[InterMT](.\u002Fprojects\u002FInterMT)。\n\n- **[2025.11.11]** 🛠️🛠️🛠️ 我们将**eval-anything**评估框架作为专门用于任意模型大规模评估的项目，整合到主仓库中。更多详情请参阅[eval-anything](.\u002Fprojects\u002Feval-anything)。\n\n- **[2025.04.14]** 📜📜📜 我们发布了针对`文本-图像-文本`模型的SFT训练教程。请查看[cookbook_en](.\u002Fcookbooks\u002Fen\u002Ftext_image_to_text_sft.ipynb)（英文版）和[cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Ftext_image_to_text_sft.ipynb)（中文版）。\n\n- **[2025.04.07]** 🥳🥳🥳 Align-Anything现已成为北京大学课程《大模型基础与对齐》的作业平台，同时支持Nvidia GPU和华为昇腾NPU。相关教程即将发布！\n\n> Align-Anything目前已成为北京大学本硕博课程《大模型基础与对齐》的课程作业平台，支持在Nvidia GPU和华为昇腾NPU上进行训练与评估。对应教程将持续发布！\n\n- **[2025.03.31]** ✅✅✅ 我们优化了Nvidia GPU和华为昇腾NPU的安装流程。详情请参阅[快速入门](#quick-start)。\n\n- **[2025.03.31]** 🚀🚀🚀 我们支持在`文本-文本`ppo训练中使用[vLLM引擎](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)包装`actor`模型进行序列生成。这极大地加速了ppo训练过程。实验结果显示，使用vLLM引擎仅需22分钟即可完成ppo训练，而基准情况则需要约150分钟。\n\n    > 😊 我们的实现得到了[OpenRLHF](https:\u002F\u002Fgithub.com\u002FOpenRLHF\u002FOpenRLHF)的鼓励，这是一个非常优秀的RLHF训练项目。\n\n- **[2025.03.27]** 📜📜📜 我们发布了针对`文本-文本`模型的DPO训练教程。请查看[cookbook_en](.\u002Fcookbooks\u002Fen\u002Ftext_to_text_dpo.ipynb)（英文版）和[cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Ftext_to_text_dpo.ipynb)（中文版）。\n\n- **[2025.03.15]** 📜📜📜 我们发布了从`文本-文本`扩展到`文本-图像-文本`模型的教程。请查看[cookbook_en](.\u002Fcookbooks\u002Fen\u002Fmodality_scaling.ipynb)（英文版）和[cookbook_zh](.\u002Fcookbooks\u002Fzh\u002Fmodality_scaling.ipynb)（中文版）。\n\n  > 我们将在未来发布更多教程。敬请关注！😊\n\n- **[2025.03.15]** 我们已支持无缝迁移到Slurm集群！请参阅我们的示例[这里](#training-on-slurm)开始使用。\n\n- **[2025.03.14]** 🛠️🛠️🛠️ 我们已支持针对`文本+图像→文本`模态模型的Safe RLHF-V。\n\n- **[2025.03.12]** 🛠️🛠️🛠️ 我们已支持DPO和SFT的断点续训，请参阅[此处](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fpull\u002F153)。\n\n- **[2025.03.11]** 🎉🎉🎉 我们支持通过预置Docker镜像安装**华为昇腾**依赖项。\n\n- **[2025.03.02]** 🎉🎉🎉 我们实现了具身智能中视觉-语言-动作模型的对齐训练，详见[VLA Trainer](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Falign_anything\u002Ftrainers\u002Ftext_video_to_action)，更多功能即将推出！\n\n- **[2025.02.28]** 🤝🤝🤝 我们支持在华为昇腾NPU上进行align-anything的训练和推理。\n\n  > 近期 align-anything 团队正在和华为昇腾团队积极联合开发，基于 VLLMs-Ascend 上的全模态推理和对齐微调。\n\n\n\u003Cdetails>\u003Csummary>更多新闻\u003C\u002Fsummary>\n\n- **[2025.02.28]** 🤗🤗🤗 我们开源了[🤗Align-DS-V](https:\u002F\u002Fhuggingface.co\u002FPKU-Alignment\u002FAlign-DS-V)，这是一款基于[DeepSeek-R1-Distill-Llama-8B](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1)的实验性视觉-语言模型，通过将额外模态融入语言模型来增强推理能力。该模型下载量已超过**18,000+**次！\n- **[2025.02.28]** 
我们支持DeepSeek统一多模态理解与生成模型的对齐微调，以及[**Janus系列**](https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FJanus)的SFT和DPO训练。相关示例可在`.\u002Fscripts`和`.\u002Fprojects\u002Fjanus`目录下找到。\n- **[2025.02.19]** 我们支持DeepSeek R1中使用的对齐方法**GRPO**。详情请参阅[GRPO Trainer](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fblob\u002Fmain\u002Falign_anything\u002Ftrainers\u002Ftext_to_text\u002Fgrpo.py)。\n- **[2025.01.21]** 我们支持**MiniCPM-o**（音频&图像）的对齐微调，该模型也被列入[官方仓库README推荐列表](https:\u002F\u002Fgithub.com\u002FOpenBMB\u002FMiniCPM-o#with-align-anything-)。\n- **[2025.01.17]** 🔥🔥🔥 我们支持在`文本2文本`模态中进行O1-like推理的微调（详见[DollyTails](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FDollyTails-12K)），多模态及其他模态功能也将很快推出！\n- **[2024.10.11]** 我们支持最新**Emu3**模型的对齐微调。\n- **[2024.08.29]** 💡💡💡 我们支持基于语言反馈的学习（区别于二元反馈）。更多详情请参阅[lang-feedback](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Fprojects\u002Flang_feedback)。\n- **[2024.10.10]** 我们支持Emu3模型的`任意→任意`模态SFT训练。\n- **[2024.09.24]** 我们支持`文本+视频→文本`模态模型的SFT、DPO、RM和PPO训练。\n- **[2024.09.13]** 我们支持`文本+音频→文本`模态模型的SFT、DPO、RM和PPO训练。\n- **[2024.08.17]** 我们支持`文本+图像→文本+图像`模态模型的DPO和PPO训练。\n- **[2024.08.15]** 我们在评估模块中新增了一项功能：位于[此处](.\u002Fscripts\u002Fmodels_pk.sh)的`models_pk`脚本，可用于比较两款模型在不同基准测试中的表现。\n- **[2024.08.06]** 我们重构了框架，以支持任意模态的评估，受支持的基准列表请见[这里](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Ftree\u002Fmain\u002Falign_anything\u002Fevaluation\u002Fbenchmarks)。\n- **[2024.08.06]** 我们支持SFT训练器和Chameleon模型处理`文本+图像→文本+图像`模态。\n- **[2024.07.23]** 我们支持SFT训练器和DPO训练器处理`文本→图像`、`文本→音频`和`文本→视频`模态。\n- **[2024.07.22]** 我们支持SFT训练器和DPO训练器处理**Chameleon**模型！\n- **[2024.07.17]** 我们开源了面向文本模态的Align-Anything-Instruction-100K数据集。该数据集提供[英文版](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FAlign-Anything-Instruction-100K)和[中文版](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002FAlign-Anything-Instruction-100K-zh)两种版本，分别来自不同的数据源，并由GPT-4精心筛选和优化质量。\n- **[2024.07.14]** 我们开源了align-anything框架。\n\n\u003C\u002Fdetails>\n\n## 快速入门\n\n### 轻松安装\n\n```bash\n# 克隆仓库\ngit clone git@github.com:PKU-Alignment\u002Falign-anything.git\ncd align-anything\n\n# 创建虚拟环境\nconda create -n align-anything python==3.11\nconda activate align-anything\n```\n\n#### 在 NVIDIA GPU 上\n\n- **`[可选]`** 我们建议在 Conda 环境中安装 [CUDA](https:\u002F\u002Fanaconda.org\u002Fnvidia\u002Fcuda)，并设置环境变量。\n\n```bash\n# 我们在 H800 计算集群上进行了测试，此版本的 CUDA 运行良好。\n# 您可以根据实际计算集群的情况调整版本。\n\nconda install nvidia\u002Flabel\u002Fcuda-12.2.0::cuda\nexport CUDA_HOME=$CONDA_PREFIX\n```\n\n> 如果您的 CUDA 安装在其他位置，例如 `\u002Fusr\u002Flocal\u002Fcuda\u002Fbin\u002Fnvcc`，您可以按如下方式设置环境变量：\n\n```bash\nexport CUDA_HOME=\"\u002Fusr\u002Flocal\u002Fcuda\"\n```\n\n最后，通过以下命令安装 `align-anything`：\n\n```bash\npip3 install -e .\n\npip3 install vllm==0.7.2 # 用于在 vllm 引擎上运行 PPO\n```\n\n#### 在华为 Ascend NPU 上\n\n您只需简单地执行以下命令即可在华为 Ascend NPU 上构建：\n\n```bash\npip3 install -e .[ascend]\n```\n\n目前 Ascend 的测试环境为：\n\n- Python 3.10.6\n- CANN 8.0.rc3\n- 架构：aarch64\n- 硬件：8x Ascend-SNT9B ARM（192 核，1536GB 内存）\n\n\u003Cdetails>\n  \u003Csummary>[可选] 使用我们的 Docker 镜像安装 Ascend 依赖\u003C\u002Fsummary>\n\n1. 
**当前 Ascend 机器环境配置**\n   当前 Ascend 机器的环境配置如下：\n\n   ```\n   - Python 版本：3.10.6\n   - CANN 版本：8.0.rc3\n   - 架构：aarch64\n   - 硬件：8x Ascend-SNT9B ARM（192 核，1536GB 内存）\n   - Ascend 驱动程序版本：23.0.7\n   - AscendHAL 版本：7.35.19\n   - AICPU 版本：1.0\n   - TDT 版本：1.0\n   - 日志版本：1.0\n   - 性能分析器版本：2.0\n   - DVPP 内核版本：1.1\n   - TSFW 版本：1.0\n   - 内部版本：V100R001C15SPC012B220\n   - 兼容版本：V100R001C30、V100R001C13、V100R001C15\n   - 兼容固件版本：[7.0.0, 7.1.99]\n   - 软件包版本：23.0.7\n   ```\n\n2. **创建 Docker 容器**\n   为了快速启动预配置环境，您可以使用位于 `.\u002Fscripts` 目录下的 `setup_docker.sh` 脚本，拉取 Docker 镜像并创建一个包含所有必要环境的容器：\n\n   ```bash\n   cd scripts\n   bash setup_docker.sh\n   ```\n\n   这将自动拉取 Docker 镜像并创建一个 Docker 容器，其中已预先配置好运行框架所需的所有依赖和设置。\n\n3. **警告**\n   **环境兼容性**：上述环境经过测试并验证可用。如果您尝试在其他环境中运行该设置，可能会遇到问题。在这种情况下，您需要自行进行调试和调整，以确保与您的特定环境兼容。\n\n\u003C\u002Fdetails>\n\n如果遇到任何问题，请参阅 [FAQ](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fdiscussions\u002F167) 获取解决方案。\n\n\u003Cdetails>\n\u003Csummary>[可选] 其他依赖\u003C\u002Fsummary>\n\n- `pip install -e .[text-to-audio]`：安装文本转音频依赖。\n- `pip install -e .[minicpmv]`：安装 minicpmv 依赖。\n- `pip install -e .[minicpmo]`：安装 minicpmo 依赖。\n\n\u003C\u002Fdetails>\n\n### 训练\n\n我们提供了一些快速入门脚本，您可以在 `.\u002Fscripts` 目录中找到它们。这些脚本会自动下载模型和数据集，并运行训练或评估。\n\n例如，`scripts\u002Fllava\u002Fllava_dpo.sh` 是用于 `文本 + 图像 -> 文本` 模态的脚本，您可以通过以下命令运行它：\n\n```bash\ncd scripts\nbash llava\u002Fllava_dpo.sh\n```\n\n**注意**：这些脚本会自动从 Hugging Face 下载模型和数据集。如果您无法访问互联网，请尝试使用 `HF 镜像`：\n\n```bash\nexport HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com\n```\n\n#### 在 Slurm 上训练\n\n> 我们完全支持无缝迁移到 Slurm。如果您计划在 Slurm 管理的集群上运行训练，我们诚邀您使用我们的示例 Slurm 训练脚本：\n\n```bash\ncd scripts\nbash slurm\u002Fslurm_llava_dpo.sh\n```\n\n该脚本已预先配置了合适的 Slurm 参数。您只需根据您的集群配置调整设置（如作业名称、分区、账户、路径和资源分配）即可。\n\n## 算法\n\n我们支持针对不同模态的基本对齐算法，每种模态可能还涉及额外的算法。例如，在文本模态中，我们还实现了 SimPO、KTO 等。\n\n| 模态                           | SFT | RM  | DPO | PPO |\n| ---------------------------------- | --- | --- | --- | --- |\n| `文本 -> 文本 (t2t)`               | ✔️  | ✔️  | ✔️  | ✔️  |\n| `文本+图像 -> 文本 (ti2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\n| `文本+图像 -> 文本+图像 (ti2ti)` | ✔️  | ✔️  | ✔️  | ✔️  |\n| `文本+音频 -> 文本 (ta2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\n| `文本+视频 -> 文本 (tv2t)`        | ✔️  | ✔️  | ✔️  | ✔️  |\n| `文本 -> 图像 (t2i)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\n| `文本 -> 视频 (t2v)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\n| `文本 -> 音频 (t2a)`              | ✔️  | ⚒️  | ✔️  | ⚒️  |\n| `文本+视频 -> 行动 (tv2act)`      | ✔️  | ⚒️  | ⚒️  | ⚒️  |\n\n## 新功能：Align VLA\n\n|              | \u003Cdetails>\u003Csummary>提示\u003C\u002Fsummary>导航到一个篮球\u003C\u002Fdetails>                                          | \u003Cdetails>\u003Csummary>提示\u003C\u002Fsummary>找到一个篮球\u003C\u002Fdetails>                                              | \u003Cdetails>\u003Csummary>提示\u003C\u002Fsummary>定位一个花瓶。\u003C\u002Fdetails>                                                    | \u003Cdetails>\u003Csummary>提示\u003C\u002Fsummary>找到一个喷雾瓶并拿起那个喷雾瓶\u003C\u002Fdetails>                 |\r\n| ------------ | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |\r\n| 基线       | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_f45c86e484dd.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_a777bda7c336.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_9ad1757f8bcd.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ea541fb165f4.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\"> |\r\n| **AlignVLA** | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ff06655c3204.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_028555eb6cfc.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_ef2cc12f4ad8.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_readme_d7a8452978f9.gif\" alt=\"Image 8\" style=\"max-width: 100%; height: auto;\">  |\r\n\r\n> 对齐微调可以显著提升VLA模型的安全性能。\n\n### 下载训练数据\n\n```bash\npython -m align_anything.utils.spoc_utils.download_training_data --save_dir \u002Fpath\u002Fto\u002Fdata  --types fifteen\n```\n\n然后解压压缩的数据包。\n\n### 训练\n\n\n将 ``align-anything\u002Fscripts\u002Fvla\u002Fspoc_sft.sh`` 中的 ``HOME_PREFIX`` 修改为你本地的数据路径。\n\n\n```bash\nbash scripts\u002Fvla\u002Fspoc_sft.sh\n```\n\n\n## 引用\n\n如果您觉得本仓库中的数据或代码有用，请引用该仓库 😊\n\n```bibtex\n@inproceedings{ji2024align,\n  title={Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback},\n  author={Jiaming Ji and Jiayi Zhou and Hantao Lou and Boyuan Chen and Donghai Hong and Xuyao Wang and Wenqi Chen and Kaile Wang and Rui Pan and Jiahao Li and Mohan Wang and Josef Dai and Tianyi Qiu and Hua Xu and Dong Li and Weipeng Chen and Jun Song and Bo Zheng and Yaodong Yang},\n  year={2024},\n  url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.15838}\n}\n``` \n\n## 报告问题\n\n如果您在使用align-anything的过程中有任何疑问，请随时在[GitHub问题页面](https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002Fnew\u002Fchoose)上提出，我们将在2-3个工作日内回复您。\n\n# 许可证\n\nalign-anything 采用 Apache License 2.0 许可证发布。","# Align-Anything 快速上手指南\n\nAlign-Anything 是由北京大学对齐团队（PKU-Alignment）开发的全模态大模型对齐框架。它支持图像、视频、音频等多种模态的大模型与人类意图及价值观进行对齐，提供 SFT、DPO、PPO、GRPO 等多种对齐算法，并原生支持 Nvidia GPU 和华为昇腾 NPU。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐 Ubuntu)\n- **Python 版本**: 3.11 (Nvidia GPU) \u002F 3.10.6 (华为昇腾 NPU)\n- **硬件支持**:\n  - **Nvidia GPU**: 支持 CUDA 12.2+ (已在 H800 集群测试验证)\n  - **华为昇腾 NPU**: 支持 CANN 8.0.rc3+ (架构 aarch64，如 Ascend-SNT9B)\n\n### 前置依赖\n- Conda (用于管理虚拟环境)\n- Git (用于克隆代码库)\n- [可选] CUDA Toolkit (若未预装，建议在 Conda 环境中安装)\n\n## 安装步骤\n\n### 1. 克隆项目并创建虚拟环境\n\n```bash\n# 克隆仓库\ngit clone git@github.com:PKU-Alignment\u002Falign-anything.git\ncd align-anything\n\n# 创建并激活虚拟环境 (推荐使用 Python 3.11)\nconda create -n align-anything python==3.11\nconda activate align-anything\n```\n\n### 2. 
根据硬件平台安装依赖\n\n#### 方案 A：Nvidia GPU 环境\n\n建议先在 Conda 环境中安装 CUDA 并配置环境变量：\n\n```bash\n# 安装 CUDA (以 12.2.0 为例，可根据实际集群情况调整)\nconda install nvidia\u002Flabel\u002Fcuda-12.2.0::cuda\n\n# 设置 CUDA_HOME 环境变量\nexport CUDA_HOME=$CONDA_PREFIX\n\n# 如果 CUDA 安装在系统目录 (如 \u002Fusr\u002Flocal\u002Fcuda)，请使用以下命令：\n# export CUDA_HOME=\"\u002Fusr\u002Flocal\u002Fcuda\"\n```\n\n安装核心包及 vLLM 引擎（用于加速 PPO 训练）：\n\n```bash\n# 安装 align-anything\npip3 install -e .\n\n# 安装 vLLM (用于在 PPO 训练中加速序列生成)\npip3 install vllm==0.7.2\n```\n\n#### 方案 B：华为昇腾 (Ascend) NPU 环境\n\n昇腾用户可直接通过额外依赖项一键安装：\n\n```bash\npip3 install -e .[ascend]\n```\n\n> **注意**: 当前昇腾测试环境为 Python 3.10.6, CANN 8.0.rc3。如需更纯净的环境，可参考官方文档使用提供的 Docker 镜像进行部署。\n\n## 基本使用\n\nAlign-Anything 提供了丰富的脚本和教程（Cookbooks）来支持不同模态和算法的训练。以下是基于官方教程的最简使用流程示例。\n\n### 1. 文本到文本 (Text-to-Text) DPO 训练\n\n参考官方提供的 Jupyter Notebook 教程，您可以快速启动 DPO 训练。\n\n```bash\n# 查看中文教程示例 (需安装 jupyter 或在本地打开)\n# 路径：.\u002Fcookbooks\u002Fzh\u002Ftext_to_text_dpo.ipynb\n\n# 或者直接使用命令行运行脚本 (示例脚本路径，具体参数需根据数据集调整)\nbash scripts\u002Frun_dpo.sh \\\n    --model_name_or_path \u003Cyour_model_path> \\\n    --dataset_name \u003Cyour_dataset_path> \\\n    --output_dir .\u002Foutput\u002Fdpo_example\n```\n\n### 2. 多模态 SFT 训练 (文图到文本)\n\n针对 `Text + Image -> Text` 模态的 SFT 微调，可参考以下教程：\n\n```bash\n# 查看中文教程示例\n# 路径：.\u002Fcookbooks\u002Fzh\u002Ftext_image_to_text_sft.ipynb\n\n# 典型训练命令结构\npython -m align_anything.trainers.text_image_to_text.sft \\\n    --model_name_or_path \u003Cyour_multimodal_model> \\\n    --data_files \u003Cyour_image_text_dataset> \\\n    --output_dir .\u002Foutput\u002Fsft_multimodal\n```\n\n### 3. 使用多模态 CLI 进行推理\n\n框架内置了支持图像、音频和视频的多模态命令行工具：\n\n```bash\n# 示例：使用多模态 CLI 进行交互 (具体参数请参考 --help)\npython -m align_anything.cli.multimodal \\\n    --model_path \u003Cyour_finetuned_model> \\\n    --modality image \\\n    --input \"path\u002Fto\u002Fimage.jpg\" \\\n    --prompt \"请描述这张图片的内容\"\n```\n\n### 更多资源\n- **详细文档**: [https:\u002F\u002Falign-anything.readthedocs.io\u002F](https:\u002F\u002Falign-anything.readthedocs.io\u002F)\n- **数据集**: [PKU-Alignment\u002Falign-anything (HuggingFace)](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FPKU-Alignment\u002Falign-anything)\n- **进阶教程**: 请查阅项目根目录下的 `cookbooks\u002Fzh\u002F` 文件夹获取针对不同场景（如 O1 推理训练、模态扩展等）的详细中文笔记。","某医疗科技团队正致力于开发一款能同时理解医学影像（如 X 光片）和医生语音指令的多模态辅助诊断助手，旨在让模型输出符合专业伦理且逻辑严密的诊断建议。\n\n### 没有 align-anything 时\n- **多模态对齐困难**：团队需分别编写脚本处理图像和音频数据，难以将视觉特征与语音指令在统一框架下进行联合微调，导致模型常出现“看图说话”但忽略语音上下文的幻觉。\n- **算法适配成本高**：想要引入 DPO 或 PPO 等高级对齐算法来提升回答安全性时，需从零重构训练代码，缺乏现成的模块化支持，研发周期被大幅拉长。\n- **缺乏评估闭环**：缺少针对“图 + 音→文本”任务的专业评估工具，无法量化模型在多轮交互中是否真正遵循了医疗规范，只能依赖人工主观抽查。\n\n### 使用 align-anything 后\n- **一站式全模态训练**：利用其高度模块化框架，团队轻松加载预制的多模态数据集，直接对 Qwen-VL 等模型进行端到端微调，实现了影像与语音指令的精准语义对齐。\n- **灵活切换对齐策略**：通过内置的 SFT、DPO 及类 O1 推理训练脚本，快速迭代出既懂医学知识又严守安全边界的模型版本，无需重复造轮子。\n- **自动化多维评估**：集成 eval-anything 项目后，团队能大规模自动测试模型在复杂多轮问诊中的表现，确保护理建议的准确性与合规性。\n\nalign-anything 通过提供全模态、多算法的统一对齐基础设施，将原本数月才能打通的多模态医疗助手研发流程缩短至数周，显著提升了模型落地的可靠性与效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPKU-Alignment_align-anything_c1de7cb2.jpg","PKU-Alignment","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPKU-Alignment_a9b98610.png","Loves Sharing and Open-Source, Making AI Safer.",null,"yaodong.yang@outlook.com","https:\u002F\u002Fgithub.com\u002FPKU-Alignment",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",55.3,{"name":87,"color":88,"percentage":89},"Jupyter 
Notebook","#DA5B0B",41.3,{"name":91,"color":92,"percentage":93},"Shell","#89e051",3.4,{"name":95,"color":96,"percentage":97},"Dockerfile","#384d54",0,4640,508,"2026-04-06T03:52:28","Apache-2.0","Linux","NVIDIA GPU (测试环境为 H800，需安装 CUDA 12.2.0) 或 华为昇腾 NPU (Ascend-SNT9B, CANN 8.0.rc3)","未说明 (昇腾测试环境为 1536GB)",{"notes":106,"python":107,"dependencies":108},"支持 NVIDIA GPU 和华为昇腾 NPU 双平台。NVIDIA 环境下建议通过 conda 安装 CUDA 12.2.0 并设置环境变量；若使用 vLLM 加速 PPO 训练需单独安装 vllm==0.7.2。昇腾环境可通过 Docker 镜像或 pip install -e .[ascend] 安装，测试硬件为 8x Ascend-SNT9B。支持 Slurm 集群部署。","3.11 (NVIDIA), 3.10.6 (Ascend)",[109,110,111],"vllm==0.7.2","cuda (可选，推荐 12.2.0)","CANN (仅昇腾，8.0.rc3)",[46,15],[114,115,116,117,118,119],"large-language-models","multimodal","rlhf","chameleon","dpo","vision-language-model","2026-03-27T02:49:30.150509","2026-04-06T19:57:05.791106",[123,128,133,138,143,148],{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},19904,"Janus 模型的实施存在过时和错误问题，目前修复进展如何？","该问题已在 commit 3b26469 至 3a555b4 的一系列更新中解决。目前 Janus 的理解（understanding）和生成（generation）任务的 SFT 与 DPO 均已测试通过。用户可以同步 PR #197 并参考 ht lou\u002FAlign_Anything_Janus 仓库的更新后重试。","https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002F194",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},19905,"在哪里可以找到 Align-DS-V 的训练数据和训练策略相关信息？","您可以查看以下 Cookbook 笔记本获取详细信息：\n1. 中文版：https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fblob\u002Fmain\u002Fcookbooks\u002Fzh\u002Fmodality_scaling.ipynb\n2. 英文版：https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fblob\u002Fmain\u002Fcookbooks\u002Fen\u002Fmodality_scaling.ipynb","https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002F132",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},19906,"运行脚本时遇到 ImportError: cannot import name 'AnyBaseModel' 错误怎么办？","这是一个已知问题，通常由代码重构导致。维护者已确认将在后续的 Pull Request 中增强相关功能以解决导入问题。如果遇到此类错误，建议拉取最新代码或关注相关 PR 的合并状态。此外，对于 Reward Model，确保在 forward 函数中传递 output_hidden_states=True 以正确输出隐藏状态。","https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002F111",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},19907,"如何在华为昇腾 NPU 上进行训练和推理？需要修改源代码吗？","在华为昇腾 NPU 上构建后，通常可以直接使用 align-anything 的命令进行训练和推理，无需手动编写 `import torch_npu`。具体的设备实现细节可以参考源码中的 device_utils.py 文件：https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fblob\u002Fmain\u002Falign_anything\u002Futils\u002Fdevice_utils.py","https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002F199",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},19908,"是否支持 Janus-Pro 的生成部分微调？相关损失函数在哪里实现？","为了保持代码一致性，该损失函数未直接在此主仓库中实现，而是实现在 `htlou\u002FAlign_Anything_Janus` 仓库中（具体位于 `janus\u002Fmodels\u002Fmodeling_vlm.py` 的第 272-308 行）。请按照 `projects\u002Fjanus\u002FREADME.md` 中的安装流程安装该扩展包，该损失函数会在 align-anything 进行 Janus 训练时自动激活。","https:\u002F\u002Fgithub.com\u002FPKU-Alignment\u002Falign-anything\u002Fissues\u002F179",{"id":149,"question_zh":150,"answer_zh":151,"source_url":137},19909,"项目是否支持 vLLM 推理加速和 LoRA 微调？","目前 align-anything 对 vLLM 推理的支持耦合在评估模块中。未来计划包括：1. 提供解耦的 vLLM 推理脚本；2. 支持奖励模型的 vLLM 推理加速。关于 LoRA 功能，目前仅在纯文本模态的大模型上经过测试，团队正在努力将其集成和迁移到其他模态。",[]]