[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-voidism--DoLa":3,"tool-voidism--DoLa":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 
等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":122,"forks":123,"last_commit_at":124,"license":125,"difficulty_score":45,"env_os":126,"env_gpu":127,"env_ram":126,"env_deps":128,"category_tags":135,"github_topics":136,"view_count":45,"oss_zip_url":125,"oss_zip_packed_at":125,"status":16,"created_at":140,"updated_at":141,"faqs":142,"releases":183},706,"voidism\u002FDoLa","DoLa","Official implementation for the paper \"DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models\"","DoLa 是一款专注于提升大语言模型事实准确性的开源解码策略。针对当前大模型普遍存在的“幻觉”问题——即生成偏离预训练事实的内容，DoLa 提供了一种无需外部知识检索或额外微调的解决方案。\n\n其核心原理在于利用 Transformer 层中事实知识局部化的特性。DoLa 通过对比模型深层与浅层投影到词汇空间后的 Logits 差异，动态调整下一个词的生成概率。这种“分层对比解码”方法能有效抑制错误信息的生成，同时保留模型的流畅度。实验数据显示，应用在 LLaMA 系列模型上时，DoLa 能在 TruthfulQA 等基准测试中带来 12-17% 的绝对性能提升。\n\nDoLa 基于 Hugging Face Transformers 实现，支持 MIT 协议。它非常适合大模型研究人员及开发者，尤其是那些希望在保持原有模型架构不变的前提下，低成本优化生成内容真实性的团队。通过简单的参数配置，即可在推理阶段直接应用，为构建更可靠的大模型应用提供了有力支持。","DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models\n===\n\n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-g.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n[![Arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2309.03883-B21A1B)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03883)\n[![Hugging Face Transformers](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97-Transformers-blue)](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)\n[![Tweet](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttp\u002Fshields.io.svg?style=social)](https:\u002F\u002Ftwitter.com\u002FYungSungChuang\u002Fstatus\u002F1701623359153316255)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fvoidism\u002FDoLa?style=social)](https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fstargazers)\n\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fvoidism\u002FDoLa\u002Fblob\u002Fmaster\u002Fdola_evaluation.ipynb)\n\nCode for the ICLR 2024 paper \"DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models\"\n\nPaper: https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03883  \nAuthors: [Yung-Sung 
Chuang](https:\u002F\u002Fpeople.csail.mit.edu\u002Fyungsung\u002F) $^\\dagger$, [Yujia Xie](https:\u002F\u002Fsites.google.com\u002Fview\u002Fyujia) $^\\ddagger$, [Hongyin Luo](https:\u002F\u002Fluohongyin.github.io\u002F) $^\\dagger$, [Yoon Kim](https:\u002F\u002Fpeople.csail.mit.edu\u002Fyoonkim\u002F) $^\\dagger$, [James Glass](https:\u002F\u002Fpeople.csail.mit.edu\u002Fjrg\u002F) $^\\dagger$, [Pengcheng He](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=TS1RoxAAAAAJ&hl=en) $^\\ddagger$  \n$^\\dagger$ Massachusetts Institute of Technology, $^\\ddagger$ Microsoft\n\n## Overview\n\n![DoLa](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvoidism_DoLa_readme_2a108a88caeb.png)\n\nDespite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that requires neither conditioning on retrieved external knowledge nor additional fine-tuning. Our approach obtains the next-token distribution by contrasting the logits obtained from projecting the later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in an LLM has generally been shown to be localized to particular transformer layers. We find that this **D**ecoding by c**o**ntrasting **La**yers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts. DoLa consistently improves truthfulness across multiple-choice and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17\\% absolute points, demonstrating its potential in making LLMs reliably generate truthful facts.
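\n\nThe layer contrast is easy to prototype with stock Hugging Face APIs. The sketch below is illustrative only and is not the implementation in this repo (which patches `transformers-4.28.1`): it shows DoLa-static with one fixed premature layer, applies `lm_head` directly to an intermediate hidden state, and omits details such as the final-layer normalization of early-exit states, the adaptive plausibility constraint, and the repetition penalty used in our experiments.\n\n```python\n# Minimal DoLa-static sketch (illustrative; not the repo's implementation).\n# Score next tokens by contrasting log-probs read out of a premature layer\n# against those of the mature (final) layer.\nimport torch\nimport torch.nn.functional as F\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained(\"huggyllama\u002Fllama-7b\")\nmodel = AutoModelForCausalLM.from_pretrained(\"huggyllama\u002Fllama-7b\")\nmodel.eval()\n\n@torch.no_grad()\ndef dola_static_scores(text, premature_layer=16, mature_layer=32):\n    ids = tok(text, return_tensors=\"pt\")\n    hidden = model(**ids, output_hidden_states=True).hidden_states\n    # hidden[0] is the embedding output, so hidden[k] is the output of layer k.\n    early = F.log_softmax(model.lm_head(hidden[premature_layer][:, -1]), dim=-1)\n    late = F.log_softmax(model.lm_head(hidden[mature_layer][:, -1]), dim=-1)\n    return late - early  # tokens that \"mature\" across layers are boosted\n\nscores = dola_static_scores(\"The capital of the state of Washington is\")\nprint(tok.decode([scores.argmax().item()]))\n```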
\n\n## Setup\n\n```\npip install -e transformers-4.28.1\npip install datasets\npip install accelerate\npip install openai # -> only for truthfulqa and gpt4_eval\n```\n\n## Experiments\n\n### Arguments\n\n| Argument          | Example           | Description   |\n| ----------------- | ----------------- | ------------- |\n| `--model-name`    | `huggyllama\u002Fllama-7b` | Specifies the model you want to use; currently we only support LLaMA-v1. |\n| `--data-path`     | `\u002Fpath\u002Fto\u002Fdataset` | Path to the dataset file or folder. |\n| `--output-path`   | `output-path.json` | Where to store the output results. |\n| `--num-gpus`      | `1` | Number of GPUs to use: `1\u002F2\u002F4\u002F8` for the `7B\u002F13B\u002F30B\u002F65B` model sizes, respectively. |\n| `--max_gpu_memory`| `27` | Maximum GPU memory size (in GiB) to allocate. Default: 27 (for a 32G V100). |\n\n### Understanding `--early-exit-layers`\n\nThe `--early-exit-layers` argument takes a string containing a sequence of layer numbers separated by commas, with no spaces in between. By specifying different numbers of layers, we make the model decode in different modes.\n\n| Number of Layers Specified | Example (str) | Description of Decoding Mode |\n| -------------------------- | ------------- | ---------------------------- |\n| 1                          | `-1`      | **Naive decoding** from the final layer output. |\n| 2                          | `16,32`   | **DoLa-static decoding** with the second specified layer (i.e. `32`) as the `mature_layer` and the first specified layer (i.e. `16`) as the `premature_layer`. |\n| >2                         | `0,2,4,6,8,10,12,14,32` | **DoLa decoding** with the last specified layer (i.e. `32`) as the `mature_layer` and all the preceding layers (i.e. `0,2,4,6,8,10,12,14`) as `candidate_premature_layers`. |
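\n\nAcross the commands in the sections below, the `--early-exit-layers` strings follow one pattern per model size: every second layer from the lower half of the network (FACTOR, GSM8K, StrategyQA, GPT-4 evaluation) or from the upper half (TruthfulQA), truncated to at most ten candidate layers, plus the final layer as the `mature_layer`. The helper below is an illustrative reconstruction of that pattern, not part of this repo:\n\n```python\n# Illustrative helper that reproduces the --early-exit-layers strings used\n# in this README (not part of the repo).\ndef early_exit_layers(num_layers, half=\"lower\"):\n    if half == \"lower\":   # FACTOR, GSM8K, StrategyQA, GPT-4 evaluation\n        start, stop = 0, min(num_layers \u002F\u002F 2, 20)\n    else:                 # TruthfulQA contrasts within the upper half\n        start, stop = max(num_layers \u002F\u002F 2, num_layers - 20), num_layers\n    return \",\".join(str(l) for l in [*range(start, stop, 2), num_layers])\n\nassert early_exit_layers(32) == \"0,2,4,6,8,10,12,14,32\"  # LLaMA-7B\nassert early_exit_layers(40, \"upper\") == \"20,22,24,26,28,30,32,34,36,38,40\"  # LLaMA-13B\n```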
\n\n### FACTOR (Multiple Choices)\nPlease download the data file `wiki_factor.csv` from https:\u002F\u002Fgithub.com\u002FAI21Labs\u002Ffactor.\n\n#### Baseline\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\npython factor_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 2\npython factor_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 4\npython factor_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\npython factor_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 2\npython factor_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 4\npython factor_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 8\n```\n\n### TruthfulQA (Multiple Choices)\n\nThe `--data-path` argument should be a folder containing `TruthfulQA.csv`. If the file does not exist, it will be downloaded automatically.\n\n#### Baseline\n```bash\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 16,18,20,22,24,26,28,30,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 20,22,24,26,28,30,32,34,36,38,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 40,42,44,46,48,50,52,54,56,58,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 60,62,64,66,68,70,72,74,76,78,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```
\n\n### TruthfulQA\n\nTo evaluate the open-ended generation results on TruthfulQA, we need to finetune two GPT-3 curie models through the OpenAI API:\n\n```\nopenai api fine_tunes.create -t finetune_truth.jsonl -m curie --n_epochs 5 --batch_size 21 --learning_rate_multiplier 0.1\nopenai api fine_tunes.create -t finetune_info.jsonl -m curie --n_epochs 5 --batch_size 21 --learning_rate_multiplier 0.1\n```\n\nAfter finetuning, we can obtain the finetuned model names by running `openai api fine_tunes.list | grep fine_tuned_model`.\n\nCreate a config file `gpt3.config.json` like this:\n\n```json\n{\"gpt_info\": \"curie:ft-xxxxxxxxxx\",\n\"gpt_truth\": \"curie:ft-xxxxxxxxxx\",\n\"api_key\": \"xxxxxxx\"}\n```\n\nAdd the arguments `--do-rating --gpt3-config gpt3.config.json` to enable GPT-3 evaluation.\n\n#### Baseline\n```bash\npython tfqa_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\n```\n\n#### DoLa\n```bash\npython tfqa_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 16,18,20,22,24,26,28,30,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 20,22,24,26,28,30,32,34,36,38,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 40,42,44,46,48,50,52,54,56,58,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 60,62,64,66,68,70,72,74,76,78,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\n```\n\n### GSM8K\n\nWe use a randomly sampled subset of the GSM8K training set as the validation set for both StrategyQA and GSM8K. The file can be downloaded [here](https:\u002F\u002Fwww.dropbox.com\u002Fscl\u002Ffi\u002Fo8zde51x0erejbp8bo8d5\u002Fgsm8k-train-sub.jsonl?rlkey=yu90sjt58fk1cell3ey4v8xuu&dl=0).\n\nThe `--data-path` argument should be either:\n- A folder that contains `gsm8k_test.jsonl`; otherwise, the file will be downloaded automatically to the folder you specify.\n- (Only for GSM8K) The path to `gsm8k-train-sub.jsonl`, which can be downloaded via the link above.\n\n#### Baseline\n```bash\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```
\n\n### StrategyQA\n\nThe `--data-path` argument should be a folder that contains `strategyqa_train.json`; otherwise, the file will be downloaded automatically to the folder you specify.\n\n#### Baseline\n```bash\npython strqa_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython strqa_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython strqa_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython strqa_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython strqa_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython strqa_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython strqa_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython strqa_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n### GPT-4 Evaluation (Vicuna QA Benchmark)\n\nFor the GPT-4 evaluation, we need the question file from [FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat). In the following commands, we assume the path to your FastChat repo is `$fastchat`.\n\n#### Baseline\n```bash\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-7b --model-id llama-7b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 1\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-13b --model-id llama-13b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 2\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-30b --model-id llama-30b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 4\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-65b --model-id llama-65b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 8\n```\n\n#### DoLa\n```bash\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --model-id llama-7b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 1\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --model-id llama-13b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 2\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --model-id llama-30b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 4\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --model-id llama-65b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 8\n```\n\nAfter running the above commands to generate the model responses, we need an OpenAI API key to pairwise-compare the responses from the different decoding runs.\n\n```bash\npython $fastchat\u002Feval\u002Feval_gpt_review.py -q 
$fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl -a output-answer-1.jsonl output-answer-2.jsonl -p $fastchat\u002Feval\u002Ftable\u002Fprompt.jsonl -r $fastchat\u002Feval\u002Ftable\u002Freviewer.jsonl -o output-review-path.jsonl -k openai_api_key\n```\n\nFor more details of GPT-4 evaluation, please check [vicuna-blog-eval](https:\u002F\u002Fgithub.com\u002Flm-sys\u002Fvicuna-blog-eval\u002Ftree\u002Fmain\u002Feval).\n\n## Reference Repositories\n- FastChat: https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\n- ContrastiveDecoding: https:\u002F\u002Fgithub.com\u002FXiangLi1999\u002FContrastiveDecoding\n- TruthfulQA: https:\u002F\u002Fgithub.com\u002Fsylinrl\u002FTruthfulQA\n- zero_shot_cot: https:\u002F\u002Fgithub.com\u002Fkojima-takeshi188\u002Fzero_shot_cot\n- FederatedScope: https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\n\n## Citation\n\n[![DOI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.48550\u002FarXiv.2309.03883-green?color=FF8000?color=009922)](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2309.03883)\n\nPlease cite our paper if it's helpful to your work!\n```\n@inproceedings{chuang2024dola,\n  title={DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models},\n  author={Yung-Sung Chuang and Yujia Xie and Hongyin Luo and Yoon Kim and James R. Glass and Pengcheng He},\n  booktitle={The Twelfth International Conference on Learning Representations},\n  year={2024},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=Th6NyL07na}\n}\n```\n","DoLa：通过对比层解码提升大型语言模型的事实性\n===\n\n[![许可证：MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-g.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2309.03883-B21A1B)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03883)\n[![🤗 Transformers](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97-Transformers-blue)](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)\n[![推文](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Furl\u002Fhttp\u002Fshields.io.svg?style=social)](https:\u002F\u002Ftwitter.com\u002FYungSungChuang\u002Fstatus\u002F1701623359153316255)\n[![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fvoidism\u002FDoLa?style=social)](https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fstargazers)\n\n[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fvoidism\u002FDoLa\u002Fblob\u002Fmaster\u002Fdola_evaluation.ipynb)\n\nICLR 2024 论文 \"DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models\" 的代码\n\n论文：https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.03883  \n作者：[Yung-Sung Chuang](https:\u002F\u002Fpeople.csail.mit.edu\u002Fyungsung\u002F) $^\\dagger$, [Yujia Xie](https:\u002F\u002Fsites.google.com\u002Fview\u002Fyujia) $^\\ddagger$, [Hongyin Luo](https:\u002F\u002Fluohongyin.github.io\u002F) $^\\dagger$, [Yoon Kim](https:\u002F\u002Fpeople.csail.mit.edu\u002Fyoonkim\u002F) $^\\dagger$, [James Glass](https:\u002F\u002Fpeople.csail.mit.edu\u002Fjrg\u002F) $^\\dagger$, [Pengcheng He](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=TS1RoxAAAAAJ&hl=en) $^\\ddagger$  \n$^\\dagger$ 麻省理工学院，$^\\ddagger$ 微软\n\n## 概述\n\n![DoLa](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvoidism_DoLa_readme_2a108a88caeb.png)\n\n尽管大型语言模型（Large Language Models, 
LLMs）具有令人印象深刻的功能，但它们容易产生幻觉（hallucinations），即生成与预训练期间看到的事实不符的内容。我们提出了一种简单的解码策略，用于减少预训练 LLM 的幻觉，该策略不需要依赖检索到的外部知识进行条件控制，也不需要额外的微调。我们的方法通过对比将较深层和较浅层投影到词汇空间后获得的 logits（未归一化的对数概率）差异来获取下一个 token 的分布，利用了 LLM 中的事实性知识通常被证明局限于特定 Transformer 层这一事实。我们发现这种通过**对**比**层**进行**解****码**（DoLA）的方法能够更好地呈现事实性知识并减少错误事实的生成。DoLA 在多项选择题任务和开放式生成任务上持续提高了真实性，例如将 LLaMA 系列模型在 TruthfulQA 上的性能提升了 12-17 个绝对百分点，展示了其在使 LLM 可靠地生成真实事实方面的潜力。\n\n## 环境配置\n\n```\npip install -e transformers-4.28.1\npip install datasets\npip install accelerate\npip install openai # -> only for truthfulqa and gpt4_eval\n```\n\n## 实验\n\n### 参数\n\n| 参数              | 示例                  | 描述                                      |\n| ----------------- | --------------------- | ----------------------------------------- |\n| `--model-name`    | `huggyllama\u002Fllama-7b` | 指定要使用的模型，目前我们仅支持 LLaMA-v1。 |\n| `--data-path`     | `\u002Fpath\u002Fto\u002Fdataset`    | 数据集文件或文件夹的路径。                 |\n| `--output-path`   | `output-path.json`    | 存储输出结果的位置。                       |\n| `--num-gpus`      | `1`                   | 使用的 GPU 数量，分别对应 `7B\u002F13B\u002F30B\u002F65B` 模型大小的 `1\u002F2\u002F4\u002F8`。 |\n| `--max_gpu_memory`| `27`                  | 分配的最大 GPU 内存大小（单位 GiB）。默认值为 27（适用于 32G V100）。 |\n\n### 理解 `--early-exit-layers`\n\n`--early-exit-layers` 参数接受一个包含由逗号分隔的层号序列的字符串，中间没有空格。通过指定不同数量的层，我们可以让模型以不同的模式进行解码。\n\n\n| 指定层数 | 示例 (字符串)       | 解码模式描述                                                                                     |\n| ---------------------------| ------------- | ----------------------------------------------------------------------------------------------- |\n| 1                          | `-1`      | 来自**最终层输出的朴素解码**。       |\n| 2                          | `16,32`   | **DoLa 静态解码**，将第二个指定的层（即 `32`）作为 `mature_layer`（成熟层），第一个指定的层（即 `16`）作为 `premature_layer`（早期层）。 |\n| >2                         | `0,2,4,6,8,10,12,14,32`    | **DoLa 解码**，将最后一个指定的层（即 `32`）作为 `mature_layer`（成熟层），所有前面的层（即 `0,2,4,6,8,10,12,14`）作为 `candidate_premature_layers`（候选早期层）。 |\n\n### FACTOR（多项选择题）\n请从 https:\u002F\u002Fgithub.com\u002FAI21Labs\u002Ffactor 下载数据文件 `wiki_factor.csv`\n\n#### 基线\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\npython factor_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 2\npython factor_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 4\npython factor_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\npython factor_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 2\npython factor_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 4\npython factor_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path 
\u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 8\n```\n\n### TruthfulQA（多项选择）\n\n`--data-path` 参数应指向一个包含 `TruthfulQA.csv` 的文件夹。如果文件不存在，系统将自动下载。\n\n#### 基线模型\n```bash\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 16,18,20,22,24,26,28,30,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 20,22,24,26,28,30,32,34,36,38,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 40,42,44,46,48,50,52,54,56,58,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython tfqa_mc_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 60,62,64,66,68,70,72,74,76,78,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n### TruthfulQA\n\n为了评估 TruthfulQA 的开放式生成结果，我们需要通过 OpenAI API 微调两个 GPT-3 curie 模型：\n\n```\nopenai api fine_tunes.create -t finetune_truth.jsonl -m curie --n_epochs 5 --batch_size 21 --learning_rate_multiplier 0.1\nopenai api fine_tunes.create -t finetune_info.jsonl -m curie --n_epochs 5 --batch_size 21 --learning_rate_multiplier 0.1\n```\n\n微调完成后，可以通过运行 `openai api fine_tunes.list | grep fine_tuned_model` 来获取微调后的模型名称。\n\n创建一个名为 `gpt3.config.json` 的配置文件，内容如下：\n\n```json\n{\"gpt_info\": \"curie:ft-xxxxxxxxxx\",\n\"gpt_truth\": \"curie:ft-xxxxxxxxxx\",\n\"api_key\": \"xxxxxxx\"}\n```\n\n为 GPT-3 评估添加参数 `--do-rating --gpt3-config gpt3.config.json`。\n\n#### 基线模型\n```bash\npython tfqa_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\n```\n\n#### DoLa\n```bash\npython tfqa_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 16,18,20,22,24,26,28,30,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1 --do-rating --gpt3-config 
\u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 20,22,24,26,28,30,32,34,36,38,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 40,42,44,46,48,50,52,54,56,58,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\npython tfqa_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 60,62,64,66,68,70,72,74,76,78,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8 --do-rating --gpt3-config \u002Fpath\u002Fto\u002Fgpt3.config.json\n```\n\n### GSM8K\n\n我们使用 GSM8K 训练集的一个随机采样子集作为 StrategyQA 和 GSM8K 的验证集。该文件可以在 [此处](https:\u002F\u002Fwww.dropbox.com\u002Fscl\u002Ffi\u002Fo8zde51x0erejbp8bo8d5\u002Fgsm8k-train-sub.jsonl?rlkey=yu90sjt58fk1cell3ey4v8xuu&dl=0) 下载。\n\n`--data-path` 参数应为以下之一：\n- 一个包含 `gsm8k_test.jsonl` 的文件夹，否则文件将自动下载到指定文件夹中。 \n- （仅限 GSM8K）上述链接可下载的 `gsm8k-train-sub.jsonl` 的路径。\n\n#### 基线模型\n```bash\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython gsm8k_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n### StrategyQA\n\n`--data-path` 参数应为一个包含 `strategyqa_train.json` 的文件夹，否则文件将自动下载到您指定的文件夹中。\n\n#### Baseline\n```bash\npython strqa_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython strqa_eval.py --model-name huggyllama\u002Fllama-13b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython strqa_eval.py --model-name huggyllama\u002Fllama-30b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython strqa_eval.py --model-name huggyllama\u002Fllama-65b --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n#### DoLa\n```bash\npython strqa_eval.py --model-name 
huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 1\npython strqa_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 2\npython strqa_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 4\npython strqa_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --data-path \u002Fpath\u002Fto\u002Fdata\u002Ffolder --output-path output-path.json --num-gpus 8\n```\n\n### GPT-4 评估（Vicuna QA 基准）\n\n在 GPT-4 评估中，我们需要来自 [FastChat](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat) 的问题文件。在以下命令中，我们假设您的 FastChat 仓库路径为 `$fastchat`。\n\n#### Baseline\n```bash\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-7b --model-id llama-7b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 1\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-13b --model-id llama-13b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 2\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-30b --model-id llama-30b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 4\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-65b --model-id llama-65b-baseline --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 8\n```\n\n#### DoLa\n```bash\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --model-id llama-7b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 1\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-13b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,40 --model-id llama-13b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 2\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-30b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,60 --model-id llama-30b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 4\npython gpt4_judge_eval.py --model-name huggyllama\u002Fllama-65b --early-exit-layers 0,2,4,6,8,10,12,14,16,18,80 --model-id llama-65b-dola --question-file $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl --answer-file output-answer.jsonl --num-gpus 8\n```\n\n运行上述命令生成模型响应后，我们需要 OpenAI API 密钥来对不同解码结果生成的响应进行成对比较。\n\n```bash\npython $fastchat\u002Feval\u002Feval_gpt_review.py -q $fastchat\u002Feval\u002Ftable\u002Fquestion.jsonl -a output-answer-1.jsonl output-answer-2.jsonl -p $fastchat\u002Feval\u002Ftable\u002Fprompt.jsonl -r $fastchat\u002Feval\u002Ftable\u002Freviewer.jsonl -o output-review-path.jsonl -k openai_api_key\n```\n\n有关 GPT-4 评估的更多详细信息，请查看 [vicuna-blog-eval](https:\u002F\u002Fgithub.com\u002Flm-sys\u002Fvicuna-blog-eval\u002Ftree\u002Fmain\u002Feval)。\n\n## 参考仓库\n- FastChat: https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\n- ContrastiveDecoding: https:\u002F\u002Fgithub.com\u002FXiangLi1999\u002FContrastiveDecoding\n- 
TruthfulQA: https:\u002F\u002Fgithub.com\u002Fsylinrl\u002FTruthfulQA\n- zero_shot_cot: https:\u002F\u002Fgithub.com\u002Fkojima-takeshi188\u002Fzero_shot_cot\n- FederatedScope: https:\u002F\u002Fgithub.com\u002Falibaba\u002FFederatedScope\n\n## 引用\n\n[![DOI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDOI-10.48550\u002FarXiv.2309.03883-green?color=FF8000?color=009922)](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2309.03883)\n\n如果对我们的工作有帮助，请引用我们的论文！\n```\n@inproceedings{chuang2024dola,\n  title={DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models},\n  author={Yung-Sung Chuang and Yujia Xie and Hongyin Luo and Yoon Kim and James R. Glass and Pengcheng He},\n  booktitle={The Twelfth International Conference on Learning Representations},\n  year={2024},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=Th6NyL07na}\n}\n```","# DoLa 快速上手指南\n\n**DoLa (Decoding by Contrasting Layers)** 是一种用于提升大语言模型（LLM）事实性的解码策略。它无需外部知识检索或额外微调，通过对比模型早期层与晚期层的 Logits 差异，有效减少模型幻觉，提高生成内容的真实性。\n\n## 环境准备\n\n- **硬件要求**: 建议使用 NVIDIA GPU。根据模型大小配置显存（例如 7B 模型建议至少 27GiB 显存）。\n- **软件环境**: Python 3.x, CUDA 环境。\n- **支持模型**: 目前仅支持 **LLaMA-v1** 系列模型（如 `huggyllama\u002Fllama-7b`）。\n\n## 安装步骤\n\n1. 克隆项目代码：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa.git\ncd DoLa\n```\n\n2. 安装依赖包：\n```bash\npip install -e transformers-4.28.1\npip install datasets\npip install accelerate\npip install openai # -> 仅用于 truthfulqa 和 gpt4_eval\n```\n\n## 基本使用\n\n### 核心参数说明\n\n运行评估脚本时，主要关注以下参数：\n\n| 参数 | 说明 |\n| :--- | :--- |\n| `--model-name` | 指定使用的模型名称，例如 `huggyllama\u002Fllama-7b` |\n| `--data-path` | 数据集文件路径或文件夹路径 |\n| `--output-path` | 结果输出路径 |\n| `--num-gpus` | 使用的 GPU 数量 (7B\u002F13B\u002F30B\u002F65B 分别对应 1\u002F2\u002F4\u002F8) |\n| `--early-exit-layers` | **关键参数**。控制 DoLa 模式，用逗号分隔的层号字符串。\u003Cbr>例：`-1` (基准解码), `16,32` (静态 DoLa), `0,2,...,32` (完整 DoLa) |\n\n### 使用示例\n\n以下以 **FACTOR** 任务为例，展示如何运行基准测试和开启 DoLa 功能。请确保已下载对应的数据文件（如 `wiki_factor.csv`）。\n\n#### 1. 基准测试 (Baseline)\n不使用 DoLa，直接从最后一层进行解码：\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\n```\n\n#### 2. 启用 DoLa\n添加 `--early-exit-layers` 参数以激活层对比机制：\n```bash\npython factor_eval.py --model-name huggyllama\u002Fllama-7b --early-exit-layers 0,2,4,6,8,10,12,14,32 --data-path \u002Fpath\u002Fto\u002Fwiki_factor.csv --output-path output-path.json --num-gpus 1\n```\n\n其他任务（如 TruthfulQA, GSM8K）使用方法类似，只需更换对应的评估脚本（如 `tfqa_mc_eval.py`, `gsm8k_eval.py`）并调整 `--early-exit-layers` 参数即可。","某互联网医疗团队基于开源 LLaMA-7B 模型开发智能问诊系统，核心需求是确保生成的医学建议严格符合事实，避免误导患者造成严重后果，因此对准确性要求极高。\n\n### 没有 DoLa 时\n- 模型频繁出现幻觉，编造不存在的药物配伍禁忌或具体剂量建议，直接威胁患者安全。\n- 对罕见病症的描述往往混淆概念，缺乏权威医学文献依据支撑，显得不够专业。\n- 生成内容存在潜在安全风险，必须依赖昂贵且低效的人工审核流程才能发布。\n- 用户因多次收到错误建议而流失，导致产品口碑严重受损且难以挽回信任。\n\n### 使用 DoLa 后\n- DoLa 通过对比早期与晚期层的 Logits 差异，有效抑制了模型的虚构倾向，无需额外训练。\n- 输出的药物信息和治疗方案更贴近真实医学文献数据，逻辑链条更严密且可追溯。\n- 事实性错误率大幅降低，显著提升了回答的可信度和专业度表现，用户满意度提高。\n- 减少了对人工复核的依赖，加快了产品上线迭代速度并大幅降低运营成本。\n\nDoLa 以零微调成本有效解决了大模型在垂直领域的事实性幻觉问题，让生成内容更加安全可靠，为高敏感场景落地提供了可行方案。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvoidism_DoLa_cc413a62.png","voidism","Yung-Sung Chuang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvoidism_3b7a960a.jpg","Research Scientist at OpenAI. 
First author of MetaCLIP 2, DoLa, SelfCite, DiffCSE","CSAIL, MIT","San Francisco, CA","b05901033@ntu.edu.tw","YungSungChuang","https:\u002F\u002Fpeople.csail.mit.edu\u002Fyungsung\u002F","https:\u002F\u002Fgithub.com\u002Fvoidism",[86,90,94,98,102,106,110,113,116,119],{"name":87,"color":88,"percentage":89},"Python","#3572A5",91.2,{"name":91,"color":92,"percentage":93},"MDX","#fcb32c",7.8,{"name":95,"color":96,"percentage":97},"Jupyter Notebook","#DA5B0B",0.4,{"name":99,"color":100,"percentage":101},"Cuda","#3A4E3A",0.3,{"name":103,"color":104,"percentage":105},"Shell","#89e051",0.1,{"name":107,"color":108,"percentage":109},"Dockerfile","#384d54",0,{"name":111,"color":112,"percentage":109},"C++","#f34b7d",{"name":114,"color":115,"percentage":109},"C","#555555",{"name":117,"color":118,"percentage":109},"Cython","#fedf5b",{"name":120,"color":121,"percentage":109},"Makefile","#427819",546,67,"2026-04-02T11:51:02",null,"未说明","需要 NVIDIA GPU，建议 32G 显存（V100），默认分配 27GiB",{"notes":129,"python":126,"dependencies":130},"仅支持 LLaMA-v1 模型；TruthfulQA 开放生成评估需额外微调 GPT-3 Curie 模型并配置 OpenAI API；部分数据集需手动下载或自动下载；支持多卡并行推理",[131,132,133,134],"transformers==4.28.1","datasets","accelerate","openai",[15],[137,138,139],"factuality","hallucinations","large-language-models","2026-03-27T02:49:30.150509","2026-04-06T05:16:56.608289",[143,148,153,158,163,168,173,178],{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},2962,"如何支持 LLaMA-2 或 Mistral 等新模型？是否需要调整参数？","不同模型在各层存储的知识分布不同，建议根据新模型调整选定的层范围。例如，TruthfulQA 任务（短句子、密集事实知识）通常需要对比更高层部分以获得提升；而大多数需要长回答推理的任务（如 GSM8K、StrategyQA），对比较低层部分会更有帮助。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F4",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},2963,"JS 散度计算时，批次（batch）内的所有 token 是否使用相同的提前退出层？","第 197 行代码位于解码步骤的循环内，因此每个 token 在不同解码步骤会有自己选择的层。`.mean(-1)` 操作使得同一时间步的所有样本共享同一个提前退出层。但在实验中通常使用 batch size = 1，所以这不是问题。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F13",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},2964,"TruthfulQA-MC 评估中为什么设置 `post_softmax=False`？这与论文公式冲突吗？","这是正确的观察。设置 `post_softmax=False` 能略微提高 TruthfulQA-MC 的性能分数，我们在论文中报告了该改进分数但遗漏了描述细节。严格来说应设为 `True` 以符合数学公式将 logits 转为概率，但 `False` 的表现更好可能是由于 TruthfulQA-MC 的数据分布特性。我们将更新 arXiv 版本说明这一点。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F7",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},2965,"图 2 中显示的 JS 散度数值为何大于 1？这不符合 [0,1] 的范围吗？","图 2 中的数值实际上乘以了 $10^5$ 进行缩放，因此实际值都在 [0, 1] 之间。这一细节在第一版 arXiv 论文中被遗漏，但在 OpenReview 版本中已在图注中补充说明。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F10",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},2966,"对比方法的核心思想是浅层输出分布是真实分布的逆吗？","不是。我们并未声称浅层分布是真实分布的逆。如果简单取逆也无法得到真实分布。我们的观点是：高层分布与浅层分布之间的差异近似于真实分布。如果一个词在浅层输出概率高，但在最终层输出概率相对较低，说明高层认为该词不是正确答案，因此概率降低。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F6",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},2967,"代码中 `get_relative_top_filter` 函数里的 `min_thresh` 有什么作用？","`min_thresh` 的作用是防止 `probs_thresh` 过大从而剔除过多的 token。例如设置 `min_tokens_to_keep=3`，则 `probs_thresh` 不会高于第 3 大 token 的概率，确保前 3 个 token 始终被保留。此实现借鉴自 ContrastiveDecoding。在我们的实验中未使用此参数（默认设为 1），相当于允许只保留 top-1 token。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F3",{"id":174,"question_zh":175,"answer_zh":176,"source_url":177},2968,"GPT-Judge 微调所需的 `finetune_truth.jsonl` 等数据在哪里获取？","可以在 TruthfulQA 
官方仓库的数据目录中找到：https:\u002F\u002Fgithub.com\u002Fsylinrl\u002FTruthfulQA\u002Ftree\u002Fmain\u002Fdata。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F17",{"id":179,"question_zh":180,"answer_zh":181,"source_url":182},2969,"如何获取大模型各中间层的 token 预测概率以绘制图表？","没有专用工具，需要在 transformers 包中插入代码（如 `modeling_llama.py` 和 `generation\u002Futils.py`）来获取解码步骤中的预测结果。虽然代码会显得杂乱，但可以工作。最后使用 matplotlib 制作表格即可。","https:\u002F\u002Fgithub.com\u002Fvoidism\u002FDoLa\u002Fissues\u002F12",[]]