[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-THUDM--LongBench":3,"tool-THUDM--LongBench":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
  languages, including Simplified Chinese, lowering the barrier for learners worldwide. The project runs on an open collaborative model with an active community and continuously updated content; for anyone seeking a clear, friendly, and professional entry into machine learning, it is an ideal starting point.

- **ragflow** (infiniflow/ragflow) · ★ 77,062 · difficulty 3 · last commit 2026-04-04 · tags: agent, image, dev framework, language model, other

  RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities: it not only extracts knowledge efficiently from all kinds of documents, but also lets models reason and execute tasks on top of that knowledge. Hallucination and stale knowledge are common pain points in LLM applications; by deeply parsing complex document structures (tables, charts, mixed layouts), RAGFlow markedly improves retrieval accuracy, curbing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes further, letting the system plan steps to solve complex problems rather than just answer questions. It suits developers, enterprise engineering teams, and AI researchers, whether standing up a private knowledge-base QA system or bringing LLMs into vertical domains. A visual workflow editor and flexible APIs lower the barrier for non-algorithm users while supporting deep customization; released under Apache 2.0, it is becoming an important bridge between general-purpose LLMs and domain-specific knowledge.

## THUDM/LongBench

- **Tagline**: LongBench v2 and LongBench (ACL 25'&24')
- ★ 1,140 · forks 122 · license MIT · difficulty 3 · last commit 2026-04-04 · tags: dev framework, language model, other
- **Languages**: Python 89.7%, TeX 5.2%, Shell 5.2%
- **Topics**: benchmark, llm, longtext, long-context
- **Owner**: THUDM (THUKEG), "ChatGLM, GLM-4, CogVLM, CodeGeeX, CogView, ImageReward, CogVideoX | CogDL, GraphMAE, AMiner | Zhipu.ai (Z.ai) & Knowledge Engineering Group (KEG)" · keg.cs.tsinghua@gmail.com · 𝕏 @thukeg · https://huggingface.co/THUDM · https://github.com/THUDM
- **Environment**: OS, RAM, and Python version unspecified; an NVIDIA GPU is required, with tensor parallelism supported and GPU memory utilization configurable. Dependencies: `datasets`, `vllm`. Notes: models are served via vLLM; CoT and RAG evaluation modes are supported; contexts reach up to 2M words; data is hosted on Hugging Face.

LongBench v2 is a benchmark built to evaluate large language models' long-context abilities. It targets a gap in existing evaluations: measuring deep understanding and logical reasoning over very long documents. The benchmark contains 503 difficult multiple-choice questions with contexts ranging from 8k to 2M words, covering six realistic task categories including single-document QA, multi-document QA, code-repository analysis, and structured-data understanding. The data comes from nearly 100 professionals, and the questions are hard enough that even human experts with search tools struggle to answer them correctly in a short time. The benchmark suits AI researchers and model developers, objectively reflecting how models perform on complex long-text tasks. Its distinguishing traits are a very high difficulty bar and an exploration of inference-time compute, aiming to provide a reliable standard for developing future superhuman long-context AI systems. The standardized multiple-choice format keeps evaluation reliable, helping the community identify model weaknesses and push long-context techniques forward.

![](https://oss.gittoolsai.com/images/THUDM_LongBench_readme_4b16b8547f07.gif)
# 📚 LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks
<p align="center">
    🌐 <a href="https://longbench2.github.io" target="_blank">Project Page</a> • 📚 <a href="https://arxiv.org/abs/2412.15204" target="_blank">LongBench v2 Paper</a> • 📊 <a href="https://huggingface.co/datasets/THUDM/LongBench-v2" target="_blank">LongBench v2 Dataset</a> • 𝕏 <a href="https://x.com/realYushiBai/status/1869946577349132766" target="_blank">Thread</a>
</p>
<p align="center">
    📖 <a href="https://arxiv.org/abs/2308.14508" target="_blank">LongBench Paper</a> • 🤗 <a href="https://huggingface.co/datasets/THUDM/LongBench" target="_blank">LongBench Dataset</a>
</p>

**📢 The original LongBench v1 files have been moved under `LongBench/`; read its README [here](LongBench/README.md).**

LongBench v2 is designed to assess the ability of LLMs to handle long-context problems requiring **deep understanding and reasoning** across real-world multitasks.
LongBench v2 has the following features: (1) **Length**: context lengths range from 8k to 2M words, with the majority under 128k. (2) **Difficulty**: challenging enough that even human experts, using search tools within the document, cannot answer correctly in a short time. (3) **Coverage**: covers a variety of realistic scenarios. (4) **Reliability**: every task is in multiple-choice format for reliable evaluation.

To elaborate, LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collected data from nearly 100 highly educated individuals with diverse professional backgrounds, and we employed both automated and manual review to maintain high quality and difficulty; human experts achieve only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which performs longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of **enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2**.

**🔍 With LongBench v2, we are eager to find out how scaling inference-time compute will affect deep understanding and reasoning in long-context scenarios. View our 🏆 leaderboard [here](https://longbench2.github.io/#leaderboard) (updating).**

<div style="text-align: center;">
  <img src="https://oss.gittoolsai.com/images/THUDM_LongBench_readme_5a15efd9df2d.png" width="600" />
</div>

<div style="text-align: center;">
  <img src="https://oss.gittoolsai.com/images/THUDM_LongBench_readme_a1f11affb345.png" width="700" />
</div>

## 🔥 Updates
🔥🔥🔥 **[2025/01/15]** More evaluation results have been added to our [leaderboard](https://longbench2.github.io/#leaderboard), including Gemini-Exp-1206, Gemini-2.0-Flash, DeepSeek-V3, and MiniMax-Text-01. Check them out!

🔥🔥🔥 **[2024/12/20]** We are excited to release **LongBench v2**! Compared to the first generation of LongBench, LongBench v2 is much longer and much more challenging. Its goal is to provide a reliable evaluation standard for the development of future superhuman long-context AI systems.

## ⚙️ How to evaluate on LongBench v2

### Load Data
You can download and load the **LongBench v2** data through Hugging Face datasets ([🤗 HF Repo](https://huggingface.co/datasets/THUDM/LongBench-v2)):
```python
from datasets import load_dataset
dataset = load_dataset('THUDM/LongBench-v2', split='train')
```
Alternatively, you can download the file from [this link](https://huggingface.co/datasets/THUDM/LongBench-v2/resolve/main/data.json) and load the data locally.
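Once loaded, you can sanity-check the benchmark's composition against the numbers above. A minimal sketch (the `domain`, `difficulty`, and `length` fields are documented in the Data Format section below; the counts in the comments are expectations from the text, not verified output):

```python
from collections import Counter

from datasets import load_dataset

# Load the benchmark and tally its composition.
dataset = load_dataset('THUDM/LongBench-v2', split='train')

print(len(dataset))                    # 503 questions, per the paper
print(Counter(dataset['domain']))      # the six major task categories
print(Counter(dataset['difficulty']))  # 'easy' / 'hard'
print(Counter(dataset['length']))      # 'short' / 'medium' / 'long'
```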
### Data Format

All data in **LongBench v2** are standardized to the following format:

```json
{
    "_id": "Unique identifier for each piece of data",
    "domain": "The primary domain category of the data",
    "sub_domain": "The specific sub-domain category within the domain",
    "difficulty": "The difficulty level of the task, either 'easy' or 'hard'",
    "length": "The length category of the task, which can be 'short', 'medium', or 'long'",
    "question": "The input/command for the task, usually short, such as questions in QA, queries in many-shot learning, etc.",
    "choice_A": "Option A", "choice_B": "Option B", "choice_C": "Option C", "choice_D": "Option D",
    "answer": "The ground-truth answer, denoted as A, B, C, or D",
    "context": "The long context required for the task, such as documents, books, code repositories, etc."
}
```
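For illustration, here is one way these fields could be assembled into a multiple-choice query. The template wording is a sketch of our own, not the official one; the actual prompts used by `pred.py` ship with the repository:

```python
def build_prompt(sample: dict) -> str:
    """Assemble an illustrative multiple-choice prompt from one record.

    The wording here is a placeholder; pred.py uses the repository's
    own prompt templates.
    """
    return (
        "Please read the following text and answer the question below.\n\n"
        f"{sample['context']}\n\n"
        f"Question: {sample['question']}\n"
        f"A. {sample['choice_A']}\n"
        f"B. {sample['choice_B']}\n"
        f"C. {sample['choice_C']}\n"
        f"D. {sample['choice_D']}\n\n"
        "Answer with a single letter: A, B, C, or D."
    )
```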
### Evaluation
Install the requirements with pip: `pip install -r requirements.txt`.

To run model evaluation, first add your model path and its context window length to `config/`, then follow these steps (we use [GLM-4-9B-Chat](https://github.com/THUDM/GLM-4) as a running example):

#### Step 1: Deploy the Model with vLLM

First, deploy your model using [vLLM](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). Run the following command to serve the model:

```bash
vllm serve THUDM/glm-4-9b-chat --api-key token-abc123 --tensor-parallel-size 4 --gpu-memory-utilization 0.95 --max_model_len 131072 --trust-remote-code
```

- `--tensor-parallel-size 4` specifies the number of tensor-parallel slices. Set it to a higher value, e.g., 8, to serve larger models such as [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) or [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct).
- Adjust `--gpu-memory-utilization` to control GPU memory usage.
- Set `--max_model_len` to the context window length of the model.

#### Step 2: Run Model Inference

Once your model is deployed, modify the `URL` and `API_KEY` in `pred.py` to match your serving instance, then run model inference with:

```bash
python pred.py --model GLM-4-9B-Chat
```
- `--cot`: Enable evaluation under the Chain-of-Thought (CoT) setting.
- `--no_context`: Test the model's performance without the long context (pure memorization).
- `--rag N`: Use the top-N retrieved contexts during +RAG evaluation; set to 0 by default to disable RAG. For details on the retrieval process, refer to the [retrieve.py](https://github.com/THUDM/LongCite/blob/main/utils/retrieve.py) file.
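Before exporting results in the next step, it may help to see what scoring a multiple-choice benchmark involves. The sketch below pulls the final standalone A-D letter out of a model reply and computes accuracy; the regex and the `reply` field name are assumptions for illustration, while the official parsing lives in `result.py`:

```python
import re

def extract_choice(reply: str):
    """Return the last standalone A-D letter in a reply, or None."""
    letters = re.findall(r"\b([A-D])\b", reply)
    return letters[-1] if letters else None

def accuracy(records):
    """records: dicts holding the gold 'answer' and the model 'reply'."""
    hits = sum(extract_choice(r["reply"]) == r["answer"] for r in records)
    return hits / len(records)

# Example: a chain-of-thought reply that ends with the chosen letter.
print(accuracy([{"answer": "B", "reply": "...so the answer is B"}]))  # 1.0
```

Taking the last matching letter rather than the first is a deliberate choice here: under the CoT setting the model reasons before answering, so the final letter is the likelier verdict.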
style=\"text-align: center;\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTHUDM_LongBench_readme_a1f11affb345.png\" width=\"700\" \u002F>\n\u003C\u002Fdiv>\n\n## 🔥 更新\n🔥🔥🔥 **[2024\u002F01\u002F15]** 更多评估结果添加到我们的 [排行榜](https:\u002F\u002Flongbench2.github.io\u002F#leaderboard)，包括 Gemini-Exp-1206, Gemini-2.0-Flash, DeepSeek-V3, and MiniMax-Text-01，快去看看吧！\n\n🔥🔥🔥 **[2024\u002F12\u002F20]** 我们很高兴发布 **LongBench v2**！与第一代 LongBench 相比，LongBench v2 更长且更具挑战性。其目标是为未来超人类长上下文 AI 系统的发展提供可靠的评估标准。\n\n## ⚙️ 如何在 LongBench v2 上进行评估\n\n### 加载数据\n您可以通过 Hugging Face 数据集下载并加载 **LongBench v2** 数据（[🤗 HF 仓库](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FTHUDM\u002FLongBench-v2)）：\n```python\nfrom datasets import load_dataset\ndataset = load_dataset('THUDM\u002FLongBench-v2', split='train')\n```\n或者，您可以从 [此链接](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FTHUDM\u002FLongBench-v2\u002Fresolve\u002Fmain\u002Fdata.json) 下载文件以加载数据。\n\n### 数据格式\n\n**LongBench v2** 中的所有数据均标准化为以下格式：\n\n```json\n{\n    \"_id\": \"Unique identifier for each piece of data\",\n    \"domain\": \"The primary domain category of the data\",\n    \"sub_domain\": \"The specific sub-domain category within the domain\",\n    \"difficulty\": \"The difficulty level of the task, either 'easy' or 'hard'\",\n    \"length\": \"The length category of the task, which can be 'short', 'medium', or 'long'\",\n    \"question\": \"The input\u002Fcommand for the task, usually short, such as questions in QA, queries in many-shot learning, etc\",\n    \"choice_A\": \"Option A\", \"choice_B\": \"Option B\", \"choice_C\": \"Option C\", \"choice_D\": \"Option D\",\n    \"answer\": \"The groundtruth answer, denoted as A, B, C, or D\",\n    \"context\": \"The long context required for the task, such as documents, books, code repositories, etc.\"\n}\n```\n\n### 评估\n使用 pip 安装依赖项：`pip install -r requirements.txt`。\n\n要运行模型评估，首先将您的模型路径及其上下文窗口长度添加到 `config\u002F` 中，然后按照以下步骤操作（我们以 [GLM-4-9B-Chat](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FGLM-4) 为例）：\n\n#### 步骤 1：使用 vLLM 部署模型\n\n首先，使用 [vLLM](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest\u002Fserving\u002Fopenai_compatible_server.html) 部署您的模型。运行以下命令来提供服务：\n\n```bash\nvllm serve THUDM\u002Fglm-4-9b-chat --api-key token-abc123 --tensor-parallel-size 4 --gpu-memory-utilization 0.95 --max_model_len 131072 --trust-remote-code\n```\n\n- `--tensor-parallel-size 4` 指定张量并行切片数量。对于服务更大的模型如 [Llama-3.1-70B-Instruct](https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.1-70B-Instruct) 或 [Qwen2.5-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2.5-72B-Instruct)，应设置为更高的值，例如 8。\n- 调整 `--gpu-memory-utilization` 以控制 GPU 内存使用情况。\n- 将 `--max_model_len` 设置为模型的上下文窗口长度。\n\n#### 步骤 2：运行模型推理\n\n一旦您的模型部署完成，修改 `pred.py` 中的 `URL` 和 `API_KEY` 以匹配您的服务实例。使用以下命令运行模型推理：\n\n```bash\npython pred.py --model GLM-4-9B-Chat\n```\n- `--cot`：启用思维链（CoT）设置下的评估。\n- `--no_context`：测试模型在没有长上下文情况下的性能（纯记忆）。\n- `--rag N`：在检索增强生成（RAG）评估期间使用检索到的前 N 个上下文。默认设置为 0 以禁用 RAG。有关检索过程的详细信息，请参阅 [retrieve.py](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FLongCite\u002Fblob\u002Fmain\u002Futils\u002Fretrieve.py) 文件。\n\n#### 步骤 3：导出结果\n\n最后，运行 `python result.py` 以导出评估结果。\n\n## 📝 引用\n```\n@article{bai2024longbench2,\n  title={LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks}, \n  author={Yushi Bai and Shangqing Tu and Jiajie Zhang and Hao Peng and Xiaozhi Wang and Xin Lv and Shulin Cao and Jiazheng Xu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},\n  journal={arXiv 
# Example Use Case

A legal-tech team is building a legal analysis system that must process hundred-page contracts, and urgently needs to validate a model's deep understanding and reasoning over very long documents.

### Without LongBench
- No high-quality public long-text test sets, so the team relies on internal anonymized data, with poor sample diversity and lingering privacy risk.
- Evaluation is subjective; it is hard to tell whether a failure comes from lost context or from faulty reasoning.
- No shared baseline, so there is no objective measure of the gap between their model and mainstream alternatives.
- Manually reviewing answers item by item is slow, dragging out release cycles and missing market windows.

### With LongBench
- Call a standardized dataset spanning 8k to 2M words directly, simulating realistic, complex long-document scenarios without data-compliance worries.
- The multiple-choice format enables automated, reliable scoring, cleanly separating context-window limits from reasoning bottlenecks.
- The official leaderboard positions their model against the field, so they can strengthen the reasoning module rather than blindly scale parameters.
- An automated evaluation pipeline cuts testing time from days to hours, accelerating long-context optimization work.

LongBench thus provides not only a standardized benchmark but also a clear technical path toward stronger deep understanding and reasoning in long-context models.
# FAQ

**Q: How do I deal with OOM (out-of-memory) errors when running inference on very long inputs?**

A: Use Flash Attention at inference time; it cuts memory usage substantially. In the ChatGLM implementation it is enabled automatically when the PyTorch version is above 2.0. Training consumes far more memory than inference because gradients, fp32 parameter copies, and so on must be stored; for 32k-length training, enable DeepSpeed ZeRO-3 and gradient_checkpointing on eight 80G A100s. A single A100 can usually run ChatGLM inference. (Source: https://github.com/THUDM/LongBench/issues/17)

**Q: Llama2 fails with a `cu_seqlens_q` shape error or OOM. What should I do?**

A: This is usually tied to the FlashAttention installation. If you hit `RuntimeError: cu_seqlens_q must have shape (batch_size + 1)`, reinstall FlashAttention. Some users report that disabling FlashAttention also lets the run proceed; the root cause appears to be environment compatibility, and reinstalling FlashAttention typically resolves it. (Source: https://github.com/THUDM/LongBench/issues/43)

**Q: Why are my reproduced ChatGLM3 benchmark scores far below the official README numbers?**

A: The officially published numbers were produced with greedy-search decoding (top_p=0, temperature=1) for every model. Check that your generation_config parameters match. Also make sure you are on the latest code; older versions had an issue in the build_chat logic that caused score differences, and updating brings scores in line with the official results. (Source: https://github.com/THUDM/LongBench/issues/59)
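To check the decoding setting mentioned in the answer above: greedy search in Hugging Face `transformers` corresponds to `do_sample=False`. A minimal sketch with a small stand-in model (`gpt2` is used purely for illustration; ChatGLM3 itself needs `trust_remote_code` and its own chat template, which this sketch does not cover):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model for illustration; the FAQ concerns ChatGLM3, whose
# loading details differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The answer to the question is", return_tensors="pt")
# do_sample=False selects greedy search, the deterministic decoding
# the official benchmark numbers were reported with.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```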
**Q: My reproduced Dureader results differ from the official numbers. How do I fix that?**

A: An earlier version of pred.py passed the dataset version incorrectly; this has been fixed. If you see a gap against the official results, pull the latest source and update the affected files, especially pred.py. (Source: https://github.com/THUDM/LongBench/issues/37)

**Q: The dataset is missing the passage_retrieval files. What should I do?**

A: The dataset has been updated in the Hugging Face repository; download the latest version from https://huggingface.co/datasets/THUDM/LongBench. Note that only the `passage_retrieval_zh` file was updated, so there is no need to re-evaluate the other datasets. (Source: https://github.com/THUDM/LongBench/issues/4)

**Q: How do I evaluate Qwen2.5 models on LongBench v2 with YaRN enabled?**

A: Modify the model's `config.json` to enable YaRN, following the Qwen2.5 documentation on processing long texts: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct#processing-long-texts. Serving and evaluating via vLLM is recommended. (Source: https://github.com/THUDM/LongBench/issues/91)
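For reference, the Qwen2.5 documentation linked above enables YaRN by adding a `rope_scaling` block to the model's `config.json`. The values below are quoted from that documentation as best we recall (a factor of 4.0 over a 32k base extends the window toward 128k), so verify against the linked page before use:

```json
{
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```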