[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-GaryYufei--AlignLLMHumanSurvey":3,"tool-GaryYufei--AlignLLMHumanSurvey":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159267,2,"2026-04-17T11:29:14",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":79,"stars":81,"forks":82,"last_commit_at":83,"license":79,"difficulty_score":84,"env_os":85,"env_gpu":86,"env_ram":86,"env_deps":87,"category_tags":90,"github_topics":91,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":103,"updated_at":104,"faqs":105,"releases":106},8596,"GaryYufei\u002FAlignLLMHumanSurvey","AlignLLMHumanSurvey","Aligning Large Language Models with Human: A Survey","AlignLLMHumanSurvey 是一份专注于“大语言模型与人类对齐”领域的权威综述资源库。随着大模型在自然语言处理任务中表现卓越，它们仍面临误解指令、生成偏见内容或产生事实性幻觉等挑战。AlignLLMHumanSurvey 旨在系统梳理如何解决这些问题，帮助模型更好地契合人类的价值观与期望。\n\n该资源库详细涵盖了三大核心板块：对齐数据的收集策略（包括来自人类反馈和强模型生成的数据）、多样化的训练方法论（如在线\u002F离线人类对齐及参数高效训练），以及全面的模型评估体系（含设计原则、基准测试与评估范式）。此外，它还整理了相关的工具包与前沿研究论文，为从业者提供了清晰的技术路线图。\n\nAlignLLMHumanSurvey 特别适合 AI 
研究人员、算法工程师以及对大模型安全性与伦理感兴趣的技术开发者使用。无论是希望深入理解对齐机制的学者，还是寻求落地最佳实践的工程师，都能从中获得宝贵的洞察。作为该领域的入门指南与进阶手册，它不仅总结了现有技术成果，更指出了未来充满潜力的研究方向，是探索大模型人性化演进不可或缺的参考坐标。","# Awesome-Align-LLM-Human\n\nA collection of papers and resources about aligning large language models (LLMs) with humans.\n\nLarge Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect (hallucinated) information. Hence, aligning LLMs with human expectations has become an active area of interest within the research community. This survey presents a comprehensive overview of these alignment technologies, including the following aspects: (1) data collection, (2) training methodologies, and (3) model evaluation. In conclusion, we collate and distill our findings, shedding light on several promising future research avenues in the field. This survey, therefore, serves as a valuable resource for anyone invested in understanding and advancing the alignment of LLMs to better suit human-oriented tasks and expectations.\n\nWe hope this repository can help researchers and practitioners gain a better understanding of this emerging field. If this repository is helpful for you, please help us by citing this paper:\n```bibtex\n@article{aligning_llm_human,\n    title={Aligning Large Language Models with Human: A Survey},\n    author={Yufei Wang and Wanjun Zhong and Liangyou Li and Fei Mi and Xingshan Zeng and Wenyong Huang and Lifeng Shang and Xin Jiang and Qun Liu},\n    journal={arXiv preprint arXiv:2307.12966},\n    year={2023}\n}\n```\n## News\n🔭 This project is under development. 
You can hit the **STAR** and **WATCH** buttons to follow the updates.\n- 2023\u002F07\u002F31: Our survey paper was featured on [[Podcast @ papersread.ai]](https:\u002F\u002Fpapersread.ai\u002Fe\u002Faligning-large-language-models-with-human-a-survey\u002F)\n- 2023\u002F07\u002F25: Our initial survey paper [Aligning Large Language Models with Human: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12966) became available.\n\n## Table of Contents\n- [News](#news)\n- [Awesome-Aligning-LLM-Human](#awesome-align-llm-human)\n    - [Related Surveys](#related-surveys)\n    - [Alignment Data](#alignment-data)\n        - [Data From Human](#data-from-human)\n        - [Data From Strong LLMs](#data-from-strong-llms)\n        - [Instructions Management](#instructions-management)\n    - [Alignment Training](#alignment-training)\n        - [Online Human Alignment](#online-human-alignment)\n        - [Offline Human Alignment](#offline-human-alignment)\n        - [Parameter-Efficient Training](#parameter-efficient-training)\n    - [Alignment Evaluation](#alignment-evaluation)\n        - [Evaluation Design Principles](#evaluation-design-principles) \n        - [Evaluation Benchmarks](#evaluation-benchmarks)\n        - [Evaluation Paradigms](#evaluation-paradigms)\n    - [Alignment Toolkits](#alignment-toolkits)\n\n## Related Surveys\n- A Survey of Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223)\n- A Survey on Multimodal Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13549)\n- A Survey on Evaluation of Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03109)\n- Challenges and Applications of Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10169)\n- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13712)\n- Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive 
Survey [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18703)\n- A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11391)\n- Unifying Large Language Models and Knowledge Graphs: A Roadmap [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302)\n- Tool Learning with Foundation Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08354)\n- Eight Things to Know about Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00612)\n- Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15217)\n- A Stage Review of Instruction Tuning [[Blog]](https:\u002F\u002Fyaofu.notion.site\u002FJune-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2)\n\n## Alignment Data\n### Data From Human\n#### NLP Benchmarks\n- PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.acl-demo.9\u002F)\n- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks [[paper]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.340\u002F)\n- The FLAN collection: Designing data and methods for effective instruction tuning [[paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13688)\n- The OIG Dataset [[Blog]](https:\u002F\u002Flaion.ai\u002Fblog\u002Foig-dataset\u002F)\n- ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07849)\n- Text Alignment Is An Efficient Unified Model for Massive NLP Tasks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02729)\n- OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization 
[[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.12017)\n- Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12659)\n\n#### Domain Knowledge\n- Learning A Foundation Language Model for Geoscience Knowledge Understanding and Utilization [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05064)\n- Lawyer LLaMA Technical Report [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15062)\n- HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06975)\n- PMC-LLaMA: Further Finetuning LLaMA on Medical Papers [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14454)\n- Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03042)\n  \n#### Hand-crafted Instructions \n- Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM [[Blog]](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002F2023\u002F04\u002F12\u002Fdolly-first-open-commercially-viable-instruction-tuned-llm)\n- OpenAssistant Conversations -- Democratizing Large Language Model Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07327)\n- Chinese open instruction generalist: A preliminary release [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07987)\n- ShareGPT [[Blog]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- Let's Verify Step by Step [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20050)\n- BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04657)\n- The Importance of Human-Labeled Data in the Era of LLMs [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14910)\n\n#### Human Preference Data\n- Training language models to follow instructions with human feedback 
[[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- Improving alignment of dialogue agents via targeted human judgements [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14375)\n- Fine-Tuning Language Models from Human Preference [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.08593)\n- Teaching language models to support answers with verified quotes [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11147)\n- WebGPT: Browser-assisted question-answering with human feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09332)\n\n### Data From Strong LLMs\n#### General Instructions\n##### Improving Input Quality\n- Self-Instruct: Aligning Language Models with Self-Generated Instructions [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n- Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01196)\n- Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15895)\n- WizardLM: Empowering Large Language Models to Follow Complex Instructions [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12244)\n- Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor [[paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.806\u002F)\n- Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14327)\n- Exploring Format Consistency for Instruction Tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15504)\n\n##### Improving Output Quality\n- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models 
[[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=_VjQlMeSB_J)\n- Orca: Progressive Learning from Complex Explanation Traces of GPT-4 [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02707)\n- Lion: Adversarial Distillation of Closed-Source Large Language Model [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12870)\n- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03047)\n- ExpertPrompting: Instructing Large Language Models to be Distinguished Experts [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14688)\n- Phoenix: Democratizing ChatGPT across Languages [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10453)\n- Improving Cross-Task Generalization with Step-by-Step Instructions [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04429)\n- The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14045)\n\n\n#### Reasoning Instructions\n##### General Reasoning\n- Specializing Smaller Language Models towards Multi-Step Reasoning [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=MXuLl38AEm)\n- Distilling Step-by-Step! 
Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.507\u002F)\n- Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18395)\n- PaD: Program-aided Distillation Specializes Large Models in Reasoning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13888)\n##### Code\n- Textbooks Are All You Need [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11644)\n- WizardCoder: Empowering Code Large Language Models with Evol-Instruct [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08568)\n- Code Alpaca: An Instruction-following LLaMA model for code generation [[Github]](https:\u002F\u002Fgithub.com\u002Fsahil280114\u002Fcodealpaca)\n- CodeT5+: Open Code Large Language Models for Code Understanding and Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.07922)\n- PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14936)\n##### Maths\n- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.07951)\n- Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14201)\n- Scaling Relationship on Learning Mathematical Reasoning with Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01825)\n  \n#### Conversational Instructions \n- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality [[Blog]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01196)\n- Enhancing Chat Language Models by Scaling High-quality 
Instructional Conversations [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14233)\n- CAMEL: Communicative Agents for \"Mind\" Exploration of Large Scale Language Model Society [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17760)\n- Selfee: Iterative self-revising llm empowered by self-feedback generation [[Blog]](https:\u002F\u002Fkaistai.github.io\u002FSelFee\u002F)\n- An Effective Data Creation Pipeline to Generate High-quality Financial Instruction Data for Large Language Model [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01415)\n\n#### Multilingual Instructions\n- Phoenix: Democratizing ChatGPT across Languages [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10453)\n- BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10968)\n- Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15011)\n- Instruct-Align: Teaching Novel Languages with to LLMs through Alignment-based Cross-Lingual Instruction [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13627)\n\n\n### Instructions Management\n#### Instruction Implications\n- How far can camels go? 
exploring the state of instruction tuning on open resources [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04751)\n- Flacuna: Unleashing the problem solving power of vicuna using flan fine-tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02053)\n- Scaling data-constrained language models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16264)\n- Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07854)\n- The False Promise of Imitating Proprietary LLMs [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15717)\n- Fundamental Limitations of Alignment in Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11082)\n#### Instruction Quantity\n- Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03692)\n- LIMA: Less Is More for Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11206)\n- Instruction Mining: High-Quality Instruction Data Selection for Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06290)\n- AlpaGasus: Training A Better Alpaca with Fewer Data [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08701)\n- Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09246)\n\n## Alignment Training\n### Online Human Alignment\n- Training language models to follow instructions with human feedback [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06767)\n- Constitutional AI: Harmlessness from AI Feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08073)\n- RLCD: Reinforcement 
Learning from Contrast Distillation for Language Model Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12950)\n\n### Offline Human Alignment\n#### Rank-based Training\n- Direct Preference Optimization: Your Language Model is Secretly a Reward Model [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18290)\n- Preference Ranking Optimization for Human Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.17492)\n- RRHF: Rank Responses to Align Language Models with Human Feedback without tears [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05302)\n- PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14936)\n- Calibrating Sequence likelihood Improves Conditional Language Generation [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=0qSOodKmJaN)\n- Making Large Language Models Better Reasoners with Alignment [[Paper]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.02144.pdf)\n\n#### Language-based Training\n- OpenChat: Less is More for Open-source Models [[Github]](https:\u002F\u002Fgithub.com\u002Fimoneoi\u002Fopenchat)\n- Languages are rewards: Hindsight finetuning using human feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02676)\n- Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=u6OfmaGIya1)\n- Training Socially Aligned Language Models in Simulated Human Society [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16960)\n- Selfee: Iterative self-revising llm empowered by self-feedback generation [[Blog]](https:\u002F\u002Fkaistai.github.io\u002FSelFee\u002F)\n- Fine-Grained Human Feedback Gives Better Rewards for Language Model Training [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01693)\n### Parameter-Efficient Training\n- LoRA: Low-Rank Adaptation of Large Language Models 
[[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=nZeVKeeFYf9)\n- QLoRA: Efficient Finetuning of Quantized LLMs [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14314)\n- Prefix-Tuning: Optimizing Continuous Prompts for Generation [[Paper]](https:\u002F\u002Faclanthology.org\u002F2021.acl-long.353\u002F)\n- The Power of Scale for Parameter-Efficient Prompt Tuning [[Paper]](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.243\u002F)\n- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=lq62uWRJjiY)\n- Parameter-Efficient Fine-Tuning Design Spaces [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=XSRSWxyJIC)\n- HINT: Hypernetwork Instruction Tuning for Efficient Zero- & Few-Shot Generalisation [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.631\u002F)\n\n### Model Architecture Design\n- Mixture-of-Experts Meets Instruction Tuning:A Winning Combination for Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14705)\n- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n\n## Alignment Evaluation\n### Evaluation Design Principles\n- Sparks of Artificial General Intelligence: Early experiments with GPT-4 [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12712)\n- Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10512)\n- Holistic Evaluation of Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09110)\n\n### Evaluation Benchmarks\n#### Closed-set Benchmarks\n##### General Knowledge\n- Measuring Massive Multitask Language Understanding [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=d7KBjmI3GmQ)\n- CMMLU: Measuring massive multitask language understanding in Chinese 
[[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09212)\n- C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08322)\n- KoLA: Carefully Benchmarking World Knowledge of Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09296)\n- M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10263)\n- AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06364)\n- Measuring Massive Multitask Chinese Understanding [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12986)\n- Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05783)\n- TABLET: Learning From Instructions For Tabular Data [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13188)\n- Can Language Models Understand Physical Concepts? [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14057)\n\n##### Reasoning\n- Training Verifiers to Solve Math Word Problems [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14168)\n- Measuring Massive Multitask Language Understanding [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=d7KBjmI3GmQ)\n- CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge [[Paper]](https:\u002F\u002Faclanthology.org\u002FN19-1421\u002F)\n- Did Aristotle Use a Laptop? 
A Question Answering Benchmark with Implicit Reasoning Strategies [[Paper]](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00370\u002F100680\u002FDid-Aristotle-Use-a-Laptop-A-Question-Answering)\n- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=_VjQlMeSB_J)\n- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09261)\n- Program Synthesis with Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.07732)\n- DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11501)\n- Evaluating Large Language Models Trained on Code [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03374)\n- Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01210)\n- RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03091)\n- ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01861)\n- StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04556)\n  \n#### Open-set Benchmarks\n##### General Chat\n- Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality [[Blog]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- Self-Instruct: Aligning Language Models with Self-Generated Instructions [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- OpenAssistant Conversations -- Democratizing Large Language Model Alignment 
[[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07327)\n- FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10928)\n- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n- AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14387)\n##### Safety\n- Safety Assessment of Chinese Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10436)\n- CValues: Measuring the Values of Chinese Large Language Models from Safety to Responsibility [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09705)\n- Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08487)\n- TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11507)\n##### Long Context\n- L-Eval: Instituting Standardized Evaluation for Long Context Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11088)\n\n### Evaluation Paradigms\n#### Human-based Evaluation\n- Self-Instruct: Aligning Language Models with Self-Generated Instructions [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n- Training language models to follow instructions with human feedback [[Paper]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n#### LLMs-based Evaluation\n##### LLMs for Evaluation\n- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment 
[[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16634)\n- GPTScore: Evaluate as You Desire [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04166)\n- Exploring the Use of Large Language Models for Reference-Free Text Quality Evaluation: A Preliminary Empirical Study [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00723)\n- Can Large Language Models Be an Alternative to Human Evaluations? [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01937)\n- FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14251)\n- AlignScore: Evaluating Factual Consistency with A Unified Alignment Function [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.634\u002F)\n- Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13809)\n- Human-like Summarization Evaluation with ChatGPT [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.02554)\n- Large Language Models Are State-of-the-Art Evaluators of Code Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14317)\n- Benchmarking Foundation Models with Language-Model-as-an-Examiner [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04181)\n- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models [[Paper]](https:\u002F\u002Faclanthology.org\u002F2023.nlp4convai-1.5\u002F)\n- LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14540)\n\n##### LLMs bias in Evaluation\n- Large Language Models are not Fair Evaluators [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17926)\n- Style Over Substance: Evaluation Biases for Large Language Models [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03025)\n- Judging 
LLM-as-a-judge with MT-Bench and Chatbot Arena [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n##### Evaluation-specific LLMs\n- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05087)\n- Wider and Deeper LLM Networks are Fairer LLM Evaluators [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01862)\n- Shepherd: A Critic for Language Model Generation [[Paper]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04592)\n\n## Alignment Toolkits\n- Llama V1 & V2 [[Github]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama) [[Paper V1]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.13971) [[Paper V2]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09288)\n- Llama-X: Open Academic Research on Improving LLaMA to SOTA LLM [[Github]](https:\u002F\u002Fgithub.com\u002FAetherCortex\u002FLlama-X)\n- Llama2-Chinese [[Github]](https:\u002F\u002Fgithub.com\u002FFlagAlpha\u002FLlama2-Chinese)\n- Colossal-AI: Making large AI models cheaper, faster, and more accessible. [[Github]](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n- Training and serving large-scale neural networks with auto parallelization. 
[[Github]](https:\u002F\u002Fgithub.com\u002Falpa-projects\u002Falpa)\n- FastChat [[Github]](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)\n- LMFlow [[Github]](https:\u002F\u002Fgithub.com\u002FOptimalScale\u002FLMFlow)\n- LLaMA2-Accessory: An Open-source Toolkit for LLM Development [[Github]](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLLaMA2-Accessory)\n","# 令人惊叹的LLM与人类对齐\n\n关于大型语言模型（LLMs）与人类对齐的论文和资源合集。\n\n在海量文本语料上训练的大型语言模型（LLMs）已成为解决广泛自然语言处理（NLP）任务的领先方案。尽管这些模型表现出色，但它们仍存在一些局限性，例如误解人类指令、生成可能带有偏见的内容或事实错误（幻觉）信息。因此，如何使LLMs更好地符合人类期望，已成为研究界的一个热门方向。本综述全面概述了这些对齐技术，涵盖以下几个方面：(1) 数据收集；(2) 训练方法；(3) 模型评估。最后，我们总结并提炼了研究发现，指出了该领域未来几个有前景的研究方向。因此，本综述对于任何致力于理解并推进LLMs对齐以更好地适应人类任务和期望的人来说，都是一份宝贵的资源。\n\n我们希望这个仓库能够帮助研究人员和从业者更好地理解这一新兴领域。如果您觉得本仓库对您有所帮助，请通过引用以下论文来支持我们的工作：\n```bibtex\n@article{aligning_llm_human,\n    title={Aligning Large Language Models with Human: A Survey},\n    author={Yufei Wang and Wanjun Zhong and Liangyou Li and Fei Mi and Xingshan Zeng and Wenyong Huang and Lifeng Shang and Xin Jiang and Qun Liu},\n    journal={arXiv preprint arXiv:2307.12966},\n    year={2023}\n}\n```\n## 最新消息\n🔭 本项目仍在开发中。您可以点击 **STAR** 和 **WATCH** 来关注最新动态。\n- 2023年7月31日：我们的综述论文被收录于 [[Podcast @ papersread.ai]](https:\u002F\u002Fpapersread.ai\u002Fe\u002Faligning-large-language-models-with-human-a-survey\u002F)\n- 2023年7月25日：我们的初始综述论文 [Aligning Large Language Models with Human: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12966) 已发布。\n\n## 目录\n- [最新消息](#news)\n- [令人惊叹的LLM与人类对齐](#awesome-align-llm-human)\n    - [相关综述](#related-surveys)\n    - [对齐数据](#alignment-data)\n        - [来自人类的数据](#data-from-human)\n        - [来自强大LLMs的数据](#data-from-strong-llms)\n        - [指令管理](#instructions-management)\n    - [对齐训练](#alignment-training)\n        - [在线人类对齐](#online-human-alignment)\n        - [离线人类对齐](#offline-human-alignment)\n        - [参数高效训练](#parameter-efficient-training)\n    - [对齐评估](#alignment-evaluation)\n        - [评估设计原则](#evaluation-design-principles) \n        - 
[评估基准](#evaluation-benchmarks)\n        - [评估范式](#evaluation-paradigms)\n    - [对齐工具包](#alignment-toolkits)\n\n## 相关综述\n- 大型语言模型综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223)\n- 多模态大型语言模型综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.13549)\n- 大型语言模型评估综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03109)\n- 大型语言模型的挑战与应用 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10169)\n- 实践中利用LLMs的力量：ChatGPT及更广泛的综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13712)\n- 领域专业化是使大型语言模型具有颠覆性的关键：综合综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18703)\n- 从验证与确认的角度看大型语言模型的安全性和可信度综述 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11391)\n- 统一大型语言模型与知识图谱：路线图 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08302)\n- 基础模型的工具学习 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08354)\n- 关于大型语言模型需要了解的八件事 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00612)\n- 人类反馈强化学习中的开放问题和根本局限性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15217)\n- 指令微调阶段回顾 [[博客]](https:\u002F\u002Fyaofu.notion.site\u002FJune-2023-A-Stage-Review-of-Instruction-Tuning-f59dbfc36e2d4e12a33443bd6b2012c2)\n\n## 对齐数据\n\n### 人类生成的数据\n#### 自然语言处理基准\n- PromptSource：用于自然语言提示的集成开发环境与存储库 [[论文]](https:\u002F\u002Faclanthology.org\u002F2022.acl-demo.9\u002F)\n- Super-NaturalInstructions：通过1600多个NLP任务上的声明式指令实现泛化 [[论文]](https:\u002F\u002Faclanthology.org\u002F2022.emnlp-main.340\u002F)\n- FLAN数据集：为高效指令微调设计的数据与方法 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13688)\n- OIG数据集 [[博客]](https:\u002F\u002Flaion.ai\u002Fblog\u002Foig-dataset\u002F)\n- ChatPLUG：基于互联网增强型指令微调的开放域生成式对话系统，用于数字人应用 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07849)\n- 文本对齐是一种高效的统一模型，可应对海量NLP任务 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02729)\n- OPT-IML：从泛化的视角扩展语言模型指令元学习 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.12017)\n- Instruct-FinGPT：通过通用大型语言模型的指令微调进行金融情感分析 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.12659)\n\n#### 
领域知识\n- 面向地球科学知识理解与利用的基础语言模型训练 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05064)\n- Lawyer LLaMA技术报告 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15062)\n- HuaTuo：基于中医药知识微调LLaMA模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06975)\n- PMC-LLaMA：在医学论文上进一步微调LLaMA [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14454)\n- 面向临床领域的参数高效微调LLaMA [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03042)\n\n#### 手工编写的指令\n- Free Dolly：推出全球首个真正开源的指令微调大型语言模型 [[博客]](https:\u002F\u002Fwww.databricks.com\u002Fblog\u002F2023\u002F04\u002F12\u002Fdolly-first-open-commercially-viable-instruction-tuned-llm)\n- OpenAssistant对话——推动大型语言模型对齐的民主化 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07327)\n- 中文开源指令通才模型：初步发布 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07987)\n- ShareGPT [[博客]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- 让我们逐步验证 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.20050)\n- BeaverTails：通过人类偏好数据集实现LLM安全对齐的改进 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.04657)\n- LLM时代下人工标注数据的重要性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14910)\n\n#### 人类偏好数据\n- 基于人类反馈训练语言模型遵循指令 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- 通过有针对性的人类判断改进对话代理的对齐性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14375)\n- 基于人类偏好对语言模型进行微调 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.08593)\n- 教导语言模型用经过验证的引用来支持答案 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11147)\n- WebGPT：结合浏览器辅助与人类反馈的问答系统 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.09332)\n\n### 强大大语言模型的数据\n#### 通用指南\n##### 提升输入质量\n- Self-Instruct：通过自生成指令对齐语言模型 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- LaMini-LM：基于大规模指令蒸馏出的多样化模型集合 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n- Baize：一个开源聊天模型，采用参数高效的微调技术，基于自我对话数据训练 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01196)\n- 大型语言模型作为标注训练数据生成器：多样性与偏见的故事 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.15895)\n- WizardLM：使大型语言模型能够遵循复杂指令 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12244)\n- 不自然的指令：几乎无需人工劳动即可微调语言模型 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.806\u002F)\n- Dynosaur：面向指令微调数据编纂的动态增长范式 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14327)\n- 探索指令微调中的格式一致性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.15504)\n\n##### 提升输出质量\n- 思维链提示在大型语言模型中激发推理能力 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=_VjQlMeSB_J)\n- Orca：从GPT-4的复杂解释轨迹中逐步学习 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02707)\n- Lion：闭源大型语言模型的对抗性蒸馏 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.12870)\n- 基于原则的语言模型自对齐：从零开始，在极少人工监督下实现 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03047)\n- ExpertPrompting：指导大型语言模型成为杰出专家 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14688)\n- Phoenix：让ChatGPT在多语言环境中普及 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10453)\n- 通过分步指令提升跨任务泛化能力 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04429)\n- CoT合集：通过思维链微调提升语言模型的零样本和少样本学习能力 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14045)\n\n\n#### 推理类指令\n##### 通用推理\n- 将小型语言模型专门化为多步推理 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=MXuLl38AEm)\n- 蒸馏出分步推理！用更少的训练数据和更小的模型规模超越大型语言模型 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.findings-acl.507\u002F)\n- 面向知识密集型任务的小型语言模型的知识增强型推理蒸馏 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18395)\n- PaD：程序辅助蒸馏使大型模型擅长推理 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13888)\n##### 代码\n- 教材就够了 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11644)\n- WizardCoder：借助Evol-Instruct赋能代码大型语言模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.08568)\n- Code Alpaca：一个用于代码生成的指令跟随LLaMA模型 [[Github]](https:\u002F\u002Fgithub.com\u002Fsahil280114\u002Fcodealpaca)\n- CodeT5+：开放的代码大型语言模型，用于代码理解和生成 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.07922)\n- PanGu-Coder2：通过排名反馈提升大型语言模型的代码能力 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14936)\n##### 数学\n- MinT：通过多视角微调提升数学推理的泛化能力 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.07951)\n- Goat：经过微调的LLaMA在算术任务上表现优于GPT-4 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14201)\n- 大型语言模型学习数学推理的规模效应 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01825)\n\n#### 对话类指令\n- Vicuna：一个开源聊天机器人，以90%*的ChatGPT质量令人印象深刻 [[博客]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- Baize：一个开源聊天模型，采用参数高效的微调技术，基于自我对话数据训练 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01196)\n- 通过扩展高质量指令对话来增强聊天语言模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14233)\n- CAMEL：用于探索大规模语言模型社会“心智”的沟通代理 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17760)\n- Selfee：由自我反馈生成驱动的迭代式自我修正语言模型 [[博客]](https:\u002F\u002Fkaistai.github.io\u002FSelFee\u002F)\n- 一种高效的数据生成流水线，用于为大型语言模型生成高质量的金融指令数据 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01415)\n\n#### 多语言指令\n- Phoenix：让ChatGPT在多语言环境中普及 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10453)\n- BayLing：通过交互式翻译弥合跨语言对齐与指令跟随之间的差距，适用于大型语言模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10968)\n- Bactrian-X：一种可复现的多语言指令跟随模型，采用低秩适应技术 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15011)\n- Instruct-Align：通过基于对齐的跨语言指令教导大型语言模型新语言 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13627)\n\n\n### 指令管理\n#### 指令的影响\n- 骆驼能走多远？探索开源资源上的指令微调现状 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04751)\n- Flacuna：利用flan微调释放vicuna的问题解决能力 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.02053)\n- 数据受限的语言模型的规模化 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16264)\n- 为中国语境下的更好指令跟随语言模型而努力：探究训练数据与评估的影响 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07854)\n- 模仿专有大型语言模型的虚假承诺 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.15717)\n- 大型语言模型对齐的根本局限性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11082)\n#### 指令数量\n- 成为自我指令者：引入最小指令微调的早停标准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03692)\n- 
LIMA：少即是多的对齐之道 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11206)\n- 指令挖掘：为大型语言模型筛选高质量指令数据 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06290)\n- AlpaGasus：用更少的数据训练更好的Alpaca [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08701)\n- 或许只需0.5%的数据：低训练数据指令微调的初步探索 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.09246)\n\n## 对齐训练\n### 在线人类对齐\n- 通过人类反馈训练语言模型遵循指令 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- RAFT：基于奖励排序的微调，用于生成式基础模型的对齐 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06767)\n- 宪法AI：来自AI反馈的无害性 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08073)\n- RLCD：基于对比蒸馏的强化学习，用于语言模型对齐 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.12950)\n\n### 离线人类对齐\n#### 基于排序的训练\n- 直接偏好优化：你的语言模型其实是一个奖励模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18290)\n- 面向人类对齐的偏好排序优化 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.17492)\n- RRHF：通过排序响应实现无需人工标注的语言模型对齐 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05302)\n- PanGu-Coder2：利用排序反馈提升大型语言模型的代码能力 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.14936)\n- 校准序列似然性可改善条件语言生成 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=0qSOodKmJaN)\n- 通过对齐使大型语言模型成为更优秀的推理者 [[论文]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2309.02144.pdf)\n\n#### 基于语言的训练\n- OpenChat：开源模型“少即是多” [[GitHub]](https:\u002F\u002Fgithub.com\u002Fimoneoi\u002Fopenchat)\n- 语言即奖励：利用人类反馈进行事后微调 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02676)\n- 三思而后行：从文本编辑中学习如何与人类价值观重新对齐 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=u6OfmaGIya1)\n- 在模拟人类社会中训练具有社会对齐特性的语言模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16960)\n- Selfee：由自我反馈生成驱动的迭代式自我修正大模型 [[博客]](https:\u002F\u002Fkaistai.github.io\u002FSelFee\u002F)\n- 细粒度的人类反馈为语言模型训练提供更好的奖励 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.01693)\n\n### 参数高效训练\n- LoRA：大型语言模型的低秩适应 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=nZeVKeeFYf9)\n- QLoRA：量化大模型的高效微调 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14314)\n- 前缀调优：优化连续提示以用于生成任务 
[[论文]](https:\u002F\u002Faclanthology.org\u002F2021.acl-long.353\u002F)\n- 规模效应在参数高效提示调优中的作用 [[论文]](https:\u002F\u002Faclanthology.org\u002F2021.emnlp-main.243\u002F)\n- 参数高效微调的自适应预算分配 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=lq62uWRJjiY)\n- 参数高效微调的设计空间 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=XSRSWxyJIC)\n- HINT：用于高效零样本和少样本泛化的超网络指令调优 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.631\u002F)\n\n### 模型架构设计\n- 混合专家模型与指令微调：大型语言模型的制胜组合 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14705)\n- LaMini-LM：基于大规模指令蒸馏而成的多样化模型集合 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n\n## 对齐评估\n### 评估设计原则\n- 人工通用智能的火花：GPT-4 的早期实验 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12712)\n- 高效测量大语言模型的认知能力：一种自适应测试视角 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.10512)\n- 语言模型的整体性评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09110)\n\n### 评估基准\n#### 封闭集基准\n##### 通用知识\n- 测量大规模多任务语言理解 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=d7KBjmI3GmQ)\n- CMMLU：测量中文中的大规模多任务语言理解 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09212)\n- C-Eval：面向基础模型的多层次多学科中文评估套件 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08322)\n- KoLA：精心评测大型语言模型的世界知识 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.09296)\n- M3KE：面向中文大型语言模型的大规模多层次多学科知识评估基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10263)\n- AGIEval：以人类为中心的基础模型评估基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.06364)\n- 测量大规模多任务中文理解 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.12986)\n- Xiezhi：一个持续更新的全方位领域知识评估基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05783)\n- TABLET：从指令中学习表格数据 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13188)\n- 语言模型能理解物理概念吗？[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14057)\n\n##### 推理\n- 训练验证者解决数学应用题 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14168)\n- 测量大规模多任务语言理解 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=d7KBjmI3GmQ)\n- CommonsenseQA：针对常识知识的问答挑战 
[[论文]](https:\u002F\u002Faclanthology.org\u002FN19-1421\u002F)\n- 亚里士多德用过笔记本电脑吗？一个包含隐式推理策略的问答基准 [[论文]](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00370\u002F100680\u002FDid-Aristotle-Use-a-Laptop-A-Question-Answering)\n- 思维链提示可激发大型语言模型的推理能力 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=_VjQlMeSB_J)\n- 挑战BIG-Bench任务及思维链是否能解决它们 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09261)\n- 使用大型语言模型进行程序合成 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.07732)\n- DS-1000：一个自然且可靠的用于数据科学代码生成的基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.11501)\n- 评估经过代码训练的大型语言模型 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03374)\n- ChatGPT生成的代码真的正确吗？对大型语言模型代码生成能力的严格评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01210)\n- RepoBench：仓库级代码自动补全系统的基准测试 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.03091)\n- ClassEval：一个手工构建的用于评估大型语言模型类级别代码生成能力的基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01861)\n- StudentEval：学生编写的用于大型语言模型代码生成的提示语基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04556)\n\n#### 开放集基准\n##### 通用聊天\n- Vicuna：一款开源聊天机器人，以90%*的ChatGPT质量令人印象深刻 [[博客]](https:\u002F\u002Flmsys.org\u002Fblog\u002F2023-03-30-vicuna\u002F)\n- Self-Instruct：通过自动生成的指令对齐语言模型 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- OpenAssistant对话——推动大型语言模型对齐民主化 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.07327)\n- FLASK：基于对齐技能集的细粒度语言模型评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.10928)\n- 使用MT-Bench和Chatbot Arena评判LLM作为裁判 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n- AlpacaFarm：一种从人类反馈中学习的方法的仿真框架 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14387)\n##### 安全性\n- 中文大型语言模型的安全性评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.10436)\n- CValues：从安全到责任的角度衡量中文大型语言模型的价值观 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09705)\n- 潜在越狱：评估大型语言模型文本安全性和输出鲁棒性的基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.08487)\n- 
TrustGPT：值得信赖且负责任的大型语言模型基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.11507)\n##### 长上下文\n- L-Eval：为长上下文语言模型建立标准化评估体系 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.11088)\n\n### 评估范式\n#### 基于人类的评估\n- Self-Instruct：通过自动生成指令对齐语言模型 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.754\u002F)\n- LaMini-LM：基于大规模指令蒸馏出的多样化模型集合 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14402)\n- 利用人类反馈训练语言模型遵循指令 [[论文]](https:\u002F\u002Fopenreview.net\u002Fforum?id=TG8KACxEON)\n- 使用MT-Bench和Chatbot Arena评判LLM作为评判者 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n#### 基于LLM的评估\n##### LLM用于评估\n- G-Eval：使用GPT-4进行更贴近人类审美的NLG评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16634)\n- GPTScore：按需定制评估方式 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04166)\n- 探索大型语言模型在无参考文本质量评估中的应用：一项初步实证研究 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.00723)\n- 大型语言模型能否替代人工评估？[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.01937)\n- FActScore：针对长文本生成中事实准确性的细粒度原子级评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14251)\n- AlignScore：通过统一对齐函数评估事实一致性 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.acl-long.634\u002F)\n- 错误分析提示使大型语言模型能够实现类人翻译评估：以ChatGPT为例 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.13809)\n- 使用ChatGPT进行类人摘要评估 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.02554)\n- 大型语言模型是代码生成评估的最先进工具 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.14317)\n- 以语言模型为考官对基础模型进行基准测试 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.04181)\n- LLM-Eval：面向开放域对话的统一多维度自动评估框架 [[论文]](https:\u002F\u002Faclanthology.org\u002F2023.nlp4convai-1.5\u002F)\n- LLM作为事实推理者：来自现有基准及其他方面的见解 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14540)\n\n##### LLM在评估中的偏见\n- 大型语言模型并非公平的评估者 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17926)\n- 形式重于内容：大型语言模型的评估偏见 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.03025)\n- 使用MT-Bench和Chatbot Arena评判LLM作为评判者 
[[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05685)\n##### 专门用于评估的LLM\n- PandaLM：用于LLM指令调优优化的自动评估基准 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.05087)\n- 更宽更深的LLM网络是更公平的LLM评估者 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.01862)\n- Shepherd：语言模型生成的批评者 [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.04592)\n\n## 对齐工具包\n- Llama V1 & V2 [[Github]](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fllama) [[论文V1]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.13971) [[论文V2]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.09288)\n- Llama-X：关于将LLaMA提升至SOTA水平的开源学术研究 [[Github]](https:\u002F\u002Fgithub.com\u002FAetherCortex\u002FLlama-X)\n- Llama2-Chinese [[Github]](https:\u002F\u002Fgithub.com\u002FFlagAlpha\u002FLlama2-Chinese)\n- Colossal-AI：让大型AI模型更便宜、更快、更易用。[[Github]](https:\u002F\u002Fgithub.com\u002Fhpcaitech\u002FColossalAI)\n- 通过自动并行化训练和部署大规模神经网络。[[Github]](https:\u002F\u002Fgithub.com\u002Falpa-projects\u002Falpa)\n- FastChat [[Github]](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)\n- LMFlow [[Github]](https:\u002F\u002Fgithub.com\u002FOptimalScale\u002FLMFlow)\n- LLaMA2-Accessory：一个用于LLM开发的开源工具包 [[Github]](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLLaMA2-Accessory)","# AlignLLMHumanSurvey 快速上手指南\n\n**项目说明**：`AlignLLMHumanSurvey` (Awesome-Align-LLM-Human) 并非一个可直接安装运行的软件工具或代码库，而是一个**学术综述资源集合**。它整理了关于“大语言模型（LLM）与人类对齐”领域的论文、数据集、训练方法及评估基准。\n\n本指南旨在帮助开发者快速浏览和利用该仓库中的核心资源，构建自己的对齐研究或应用方案。\n\n## 1. 环境准备\n\n由于本项目主要是文献和资源列表，无需特定的运行时环境。但为了阅读论文和复现列表中提到的算法，建议准备以下基础环境：\n\n*   **操作系统**：Linux \u002F macOS \u002F Windows\n*   **前置依赖**：\n    *   Git（用于克隆仓库）\n    *   Python 3.8+（用于运行列表中引用的具体开源代码）\n    *   PyTorch \u002F TensorFlow（根据具体复现的模型需求安装）\n*   **网络环境**：部分论文链接托管于 arXiv 或 GitHub，国内访问可能较慢，建议配置科学上网环境或使用学术镜像。\n\n## 2. 
安装步骤（获取资源）\n\n通过 Git 克隆仓库到本地，即可获取完整的资源列表和分类索引。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FGaryYufei\u002FAlignLLMHumanSurvey.git\ncd AlignLLMHumanSurvey\n```\n\n*注：该项目持续更新中，建议定期执行 `git pull` 获取最新论文列表。*\n\n## 3. 基本使用\n\n本项目的“使用”方式是作为**导航地图**，根据你的需求查找对应的论文、数据集或代码实现。以下是三种典型的使用场景：\n\n### 场景一：查找对齐数据集 (Alignment Data)\n如果你需要数据来微调模型，请查看 `README.md` 中的 **[Alignment Data]** 章节。\n\n*   **人类反馈数据**：搜索关键词 `Human Preference Data`，可找到如 *OpenAssistant Conversations* 或 *BeaverTails* 等经典数据集论文。\n*   **指令微调数据**：搜索 `Data From Strong LLMs`，可找到 *Self-Instruct*, *WizardLM* 等利用强模型生成数据的方法。\n*   **领域数据**：在 `Domain Knowledge` 下可找到医疗 (HuaTuo)、法律 (Lawyer LLaMA) 等垂直领域的数据资源。\n\n**操作示例**：\n在本地打开 `README.md`，定位到 `### Data From Human`，点击对应论文的 `[Paper]` 或 `[Blog]` 链接获取数据下载方式。\n\n### 场景二：学习对齐训练方法 (Alignment Training)\n如果你需要了解如何训练模型以符合人类价值观，请参考 **[Alignment Training]** 章节。\n\n*   **在线\u002F离线对齐**：查阅 `Online Human Alignment` (如 RLHF) 和 `Offline Human Alignment` 相关论文。\n*   **高效微调**：查阅 `Parameter-Efficient Training` 获取 LoRA 等低资源消耗的对齐方案。\n\n**操作示例**：\n针对代码生成任务的对齐，可在 `Reasoning Instructions` -> `Code` 子栏目下找到 *WizardCoder* 或 *Code Alpaca* 的实现思路。\n\n### 场景三：评估模型对齐效果 (Alignment Evaluation)\n如果你需要测试模型的对齐程度，请参考 **[Alignment Evaluation]** 章节。\n\n*   **评估原则**：阅读 `Evaluation Design Principles` 了解设计评测集的核心逻辑。\n*   **评测基准**：在 `Evaluation Benchmarks` 中寻找适合的测试集（如安全性、事实性、指令遵循度）。\n\n### 引用本项目\n如果在你的研究或项目中使用了该资源列表，请在论文中引用：\n\n```bibtex\n@article{aligning_llm_human,\n    title={Aligning Large Language Models with Human: A Survey},\n    author={Yufei Wang and Wanjun Zhong and Liangyou Li and Fei Mi and Xingshan Zeng and Wenyong Huang and Lifeng Shang and Xin Jiang and Qun Liu},\n    journal={arXiv preprint arXiv:2307.12966},\n    year={2023}\n}\n```","某医疗科技公司的算法团队正致力于优化其垂直领域的医疗问答大模型，以解决模型在诊断建议中偶尔出现的幻觉和语气生硬问题。\n\n### 没有 AlignLLMHumanSurvey 时\n- **文献检索如大海捞针**：团队成员需手动在 arXiv 上筛选数百篇论文，难以区分哪些是真正针对“人类对齐”的数据收集或训练方法，效率极低。\n- **技术路线选择盲目**：缺乏系统性的方法论指导，团队在“在线人类反馈”与“离线数据微调”之间犹豫不决，容易选错适合医疗场景的对齐策略。\n- 
**评估标准缺失**：仅凭主观感觉判断模型是否变好，缺乏权威的评估基准（Benchmarks）和设计原则，无法量化模型在安全性和事实准确性上的提升。\n- **重复造轮子风险高**：因不了解现有的开源工具包（Toolkits）和最新进展，可能花费大量时间复现别人已经解决过的对齐难题。\n\n### 使用 AlignLLMHumanSurvey 后\n- **知识体系一键构建**：利用其分类清晰的目录，团队迅速锁定了关于“医疗指令数据管理”和“参数高效训练”的核心论文，将调研周期从数周缩短至几天。\n- **决策依据科学明确**：参考综述中对比的多种对齐训练范式，团队果断采用了更适合高敏感医疗场景的离线人类对齐方案，规避了在线交互的安全风险。\n- **评估维度全面量化**：直接复用文中推荐的评估范式和基准测试，建立了一套包含事实准确性、偏见检测及同理心维度的量化指标体系。\n- **站在巨人肩膀创新**：通过梳理出的未来研究方向，团队避开了成熟技术的重复开发，将资源集中攻克医疗领域特有的长尾对齐难题。\n\nAlignLLMHumanSurvey 不仅是一份文献清单，更是研发团队从盲目探索转向系统化落地人类对齐技术的导航图。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FGaryYufei_AlignLLMHumanSurvey_f73c4493.png","GaryYufei","Yufei Wang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FGaryYufei_1f58fb97.jpg","MQ PhD Candidates","SYSU-UQ-MQ","澳大利亚","yufei.wang@students.mq.edu.au",null,"https:\u002F\u002Fgithub.com\u002FGaryYufei",742,30,"2026-03-22T06:45:02",1,"","未说明",{"notes":88,"python":86,"dependencies":89},"该项目是一个综述性资源列表（Awesome List），主要收集了关于大语言模型与人类对齐的论文、数据集和方法论，本身不是一个可执行的软件工具或代码库，因此 README 中未包含具体的运行环境、硬件配置或依赖库要求。",[],[14,35],[92,93,94,95,96,97,98,99,100,101,102],"chatgpt","gpt-4","large-language-models","llms","rlhf","supervised-finetuning","survey","awesome","chinese-llama","llama","llama2","2026-03-27T02:49:30.150509","2026-04-18T02:20:44.726074",[],[]]