[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-HazyResearch--legalbench":3,"similar-HazyResearch--legalbench":87},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":14,"owner_avatar_url":15,"owner_bio":16,"owner_company":17,"owner_location":17,"owner_email":18,"owner_twitter":17,"owner_website":19,"owner_url":20,"languages":21,"stars":30,"forks":31,"last_commit_at":32,"license":17,"difficulty_score":33,"env_os":34,"env_gpu":34,"env_ram":34,"env_deps":35,"category_tags":38,"github_topics":17,"view_count":41,"oss_zip_url":17,"oss_zip_packed_at":17,"status":42,"created_at":43,"updated_at":44,"faqs":45,"releases":86},3350,"HazyResearch\u002Flegalbench","legalbench","An open science effort to benchmark legal reasoning in foundation models","LegalBench 是一个致力于评估大语言模型法律推理能力的开源基准项目。它由计算机科学家与法律专家联手打造，旨在解决当前 AI 在法律领域缺乏统一、专业评估标准的问题，帮助人们厘清模型在处理复杂法律文本时的真实能力边界与安全可靠性。\n\n该项目汇集了来自律师、法学教授及法律科技从业者等 40 多位贡献者精心设计的 162 项任务，涵盖从判断证据是否属于“传闻”、提取法律术语定义，到回答具体法律条文问答等多种场景。这些任务不仅模拟了法学院学生的考核内容，也反映了法律从业者的实际工作需求，具有极高的实用价值。\n\nLegalBench 特别适合 AI 研究人员、法律科技开发者以及关注法律人工智能应用的学者使用。通过提供多样化的输入输出对，它能系统地测试模型在不同法律领域、任务结构及难度层级下的表现。其独特的亮点在于采用了法律社区的众包模式构建数据集，确保了任务的专业性与代表性，同时也为算法创新提供了丰富的挑战场景。作为一个持续更新的开放科学项目，LegalBench 正不断吸纳新任务，推动法律人工智能向更严谨、更实用的方向发展。","# 📜 LegalBench\n\n\u003Cdiv align=\"center\">\n\nThe LegalBench project is an ongoing open science effort to collaboratively curate tasks for evaluating legal reasoning in English large language models (LLMs). The benchmark currently consists of 162 tasks gathered from 40 contributors.\n\n[**Website**](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002F)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**Data**](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fnguha\u002Flegalbench)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**Paper**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11462)\n\u003C\u002Fdiv>\n\n\n## What is LegalBench?\n\nLegalBench is a benchmark consisting of different legal reasoning *tasks*. Each task has an associated *dataset*, consisting of input-output pairs. Examples of tasks include:\n\n- The [hearsay](.\u002Ftasks\u002Fhearsay\u002FREADME.md) task, for which the input is a description of some evidence and the output is whether or not that evidence would be considered hearsay (i.e., \"Yes\" or \"No\").\n- The [definition extraction](.\u002Ftasks\u002Fdefinition_extraction\u002FREADME.md) task, for which the input is a sentence (from a Supreme Court opinion) which defines a term, and the output is the term.\n- The [Rule QA](.\u002Ftasks\u002Frule_qa\u002FREADME.md) task, for which the input is a question about the substance of a law, and the output is the correct answer to the question.\n\nTask datasets can be used to evaluate LLMs by providing the LLM with the input, and evaluating how frequently it generates the corresponding output. LegalBench tasks cover a wide range of textual types, task structures, legal domains, and difficulty levels. Descriptions of each task are available [here](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Ftasks\u002F).\n\nNotably, LegalBench tasks have been assembled through a unique crowd-sourcing effort within the legal community. 
[Individuals and organizations](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002F#contributors) from a broad range of legal backgrounds---lawyers, computational legal practitioners, law professors, and legal impact labs---have contributed tasks they see as \"interesting\" or \"useful.\" Interesting tasks are those that require a type of reasoning the contributor deemed worth measuring; for instance, a task might correspond to one that law students are frequently expected to perform as part of assessments. Useful tasks correspond to processes that legal professionals currently engage in (either manually or through other means), and thus represent potential practical applications for LLMs.\n\n**LegalBench is ongoing and we are always looking to incorporate more tasks. See [here](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fcontribute\u002F) for more information on how to get involved!**\n\n\n## Who are we?\n\nWe're an [interdisciplinary team](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002F#contributors) of computer scientists and lawyers spanning academia and industry, interested in understanding the types of legal tasks that modern language models are capable of solving. To do so, we've been accumulating and constructing a diverse collection of legal NLP tasks---all of which are [available](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Ftasks\u002F) in this repository. We have two goals for this project:\n\n1. First, we'd like to use these datasets to continually evaluate large language models on tasks involving legal reasoning and legal text. In particular, we're excited by the idea that the unique challenges posed by legal text may inspire new algorithmic innovations.\n2. Second, we'd like to use these datasets to guide legal practitioners and academics as they seek to understand the safety and reliability implications of these models in their daily workflows.\n\nOur approach to building LegalBench is inspired by contemporaneous open-science efforts for democratizing participation in machine learning development (e.g., [HELM](https:\u002F\u002Fcrfm.stanford.edu\u002Fhelm\u002Flatest\u002F), [BigBench](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench)).\n\n\n## Contributing a task\n\nPlease see [here](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fcontribute\u002F) for more details.\n\n\n## Evaluating on LegalBench tasks\n\nPlease see [here](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fgetting-started\u002F) for more details; a minimal illustrative sketch also appears below.\n\n\n## Licenses\n\nLegalBench is a mix of created and transformed datasets. We ask that you follow the license of the dataset creator. Please see the [task page](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Ftree\u002Fmain\u002Ftasks) for a list of tasks and licenses.\n\nPlease see the [notebook](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fblob\u002Fmain\u002FUsingLegalBench.ipynb) for an example of how to select tasks based on license information.\n\n## Recent work involving LegalBench\n\nWe'd like to highlight community efforts building on LegalBench. If you've worked with LegalBench and would like us to add a pointer to your work here, please get in touch! 
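\n\nFor orientation, here is the minimal, illustrative evaluation sketch referenced above (not the project's official harness). It loads one task with the Hugging Face `datasets` library and scores exact matches; the `trust_remote_code` flag is needed on recent `datasets` versions because the dataset ships a loading script, while the `input`\u002F`output` column names and the placeholder `my_model` function are assumptions to adapt to your own setup:\n\n```python\nfrom datasets import load_dataset\n\n# Load a single LegalBench task; newer versions of `datasets` require\n# trust_remote_code=True because the dataset ships a loading script.\ndata = load_dataset(\"nguha\u002Flegalbench\", \"hearsay\", trust_remote_code=True)\n\ndef my_model(text: str) -> str:\n    # Hypothetical stand-in: replace with a real LLM call.\n    return \"No\"\n\n# Exact-match scoring over the test split, mirroring the\n# input-output pair structure described above.\ntest = data[\"test\"]\ncorrect = sum(\n    my_model(row[\"input\"]).strip().lower() == row[\"output\"].strip().lower()\n    for row in test\n)\nprint(f\"hearsay exact-match accuracy: {correct \u002F len(test):.3f}\")\n```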
\n\nProjects\u002Fevaluation frameworks:\n- [vals.ai](https:\u002F\u002Fwww.vals.ai\u002F)\n- Stanford Center for Research on Foundation Models' [HELM Lite Benchmark](https:\u002F\u002Fcrfm.stanford.edu\u002Fhelm\u002Flite\u002Flatest\u002F)\n- [Reexpress AI: Uncertainty-aware Legal Reasoning](https:\u002F\u002Fgithub.com\u002FReexpressAI\u002FExample_Data\u002Fblob\u002Fmain\u002Ftutorials\u002Ftutorial8_legalbench\u002FREADME.md)\n\nResearch:\n- Nihal V. Nayak, Yiyang Nan, Avi Trost, & Stephen H. Bach. [Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18334) (2024).\n- Sergio Servantez, Joe Barrow, Kristian Hammond, & Rajiv Jain. [Chain of Logic: Rule-Based Reasoning with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10400) (2024).\n\n## Citing this work\n\nPlease include all citations below, which credit all sources LegalBench draws on.\n\n```text\n@misc{guha2023legalbench,\n      title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},\n      author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},\n      year={2023},\n      eprint={2308.11462},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n@article{koreeda2021contractnli,\n  title={ContractNLI: A dataset for document-level natural language inference for contracts},\n  author={Koreeda, Yuta and Manning, Christopher D},\n  journal={arXiv preprint arXiv:2110.01799},\n  year={2021}\n}\n@article{hendrycks2021cuad,\n  title={CUAD: An expert-annotated NLP dataset for legal contract review},\n  author={Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},\n  journal={arXiv preprint arXiv:2103.06268},\n  year={2021}\n}\n@article{wang2023maud,\n  title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding},\n  author={Wang, Steven H and Scardigli, Antoine and Tang, Leonard and Chen, Wei and Levkin, Dimitry and Chen, Anya and Ball, Spencer and Woodside, Thomas and Zhang, Oliver and Hendrycks, Dan},\n  journal={arXiv preprint arXiv:2301.00876},\n  year={2023}\n}\n@inproceedings{wilson2016creation,\n  title={The creation and analysis of a website privacy policy corpus},\n  author={Wilson, Shomir and Schaub, Florian and Dara, Aswarth Abhilash and Liu, Frederick and Cherivirala, Sushain and Leon, Pedro Giovanni and Andersen, Mads Schaarup and Zimmeck, Sebastian and Sathyendra, Kanthashree Mysore and Russell, N Cameron and others},\n  booktitle={Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},\n  pages={1330--1340},\n  year={2016}\n}\n@inproceedings{zheng2021does,\n  title={When does pretraining help? 
Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings},\n  author={Zheng, Lucia and Guha, Neel and Anderson, Brandon R and Henderson, Peter and Ho, Daniel E},\n  booktitle={Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law},\n  pages={159--168},\n  year={2021}\n}\n@article{zimmeck2019maps,\n  title={MAPS: Scaling privacy compliance analysis to a million apps},\n  author={Zimmeck, Sebastian and Story, Peter and Smullen, Daniel and Ravichander, Abhilasha and Wang, Ziqi and Reidenberg, Joel R and Russell, N Cameron and Sadeh, Norman},\n  journal={Proc. Priv. Enhancing Tech.},\n  volume={2019},\n  pages={66},\n  year={2019}\n}\n@article{ravichander2019question,\n  title={Question answering for privacy policies: Combining computational and legal perspectives},\n  author={Ravichander, Abhilasha and Black, Alan W and Wilson, Shomir and Norton, Thomas and Sadeh, Norman},\n  journal={arXiv preprint arXiv:1911.00841},\n  year={2019}\n}\n@article{holzenberger2021factoring,\n  title={Factoring statutory reasoning as language understanding challenges},\n  author={Holzenberger, Nils and Van Durme, Benjamin},\n  journal={arXiv preprint arXiv:2105.07903},\n  year={2021}\n}\n@article{lippi2019claudette,\n  title={CLAUDETTE: an automated detector of potentially unfair clauses in online terms of service},\n  author={Lippi, Marco and Pa{\l}ka, Przemys{\l}aw and Contissa, Giuseppe and Lagioia, Francesca and Micklitz, Hans-Wolfgang and Sartor, Giovanni and Torroni, Paolo},\n  journal={Artificial Intelligence and Law},\n  volume={27},\n  pages={117--139},\n  year={2019},\n  publisher={Springer}\n}\n```\n\n## Contact\n\nFor questions, concerns, or comments, please reach out to Neel (nguha@stanford.edu).\n","# 📜 LegalBench\n\n\u003Cdiv align=\"center\">\n\nLegalBench 项目是一项持续的开放科学倡议，旨在协作整理用于评估英语大型语言模型（LLMs）法律推理能力的任务。该基准目前由来自40位贡献者的162个任务组成。\n\n[**网站**](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002F)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**数据**](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fnguha\u002Flegalbench)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[**论文**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11462)\n\u003C\u002Fdiv>\n\n\n## 什么是 LegalBench？\n\nLegalBench 是一个由不同法律推理 *任务* 组成的基准测试。每个任务都配有一个相关的 *数据集*，由输入-输出对构成。任务示例包括：\n\n- [传闻证据](.\u002Ftasks\u002Fhearsay\u002FREADME.md) 任务：输入是一段关于证据的描述，输出则是该证据是否会被视为传闻证据（即“是”或“否”）。\n- [定义提取](.\u002Ftasks\u002Fdefinition_extraction\u002FREADME.md) 任务：输入是一句最高法院判决中的句子，其中定义了一个术语，输出则是该术语。\n- [规则问答](.\u002Ftasks\u002Frule_qa\u002FREADME.md) 任务：输入是一道关于某项法律实质内容的问题，输出则是该问题的正确答案。\n\n可以通过向 LLM 提供输入，并评估其生成对应输出的频率，来使用这些任务数据集对 LLM 进行评估。LegalBench 中的任务涵盖了广泛的文本类型、任务结构、法律领域以及难度级别。每项任务的详细说明可在 [这里](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Ftasks\u002F) 查阅。\n\n值得注意的是，LegalBench 的任务是通过法律界内独特的众包方式汇集而成。来自广泛法律背景的个人和组织——律师、计算法律从业者、法学教授以及法律影响实验室——都贡献了他们认为“有趣”或“有用”的任务。所谓“有趣”的任务，是指那些需要某种特定推理能力，而贡献者认为值得衡量的任务；例如，这类任务可能对应于法学院学生在考核中经常被要求完成的内容。而“有用”的任务则与法律专业人士当前正在执行的流程相关，无论是手动操作还是借助其他工具，因此它们代表了 LLM 在实际应用中的潜在场景。\n\n**LegalBench 仍在持续发展中，我们始终欢迎更多任务的加入。如需了解如何参与，请访问 [这里](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fcontribute\u002F)！**\n\n\n## 我们是谁？\n\n我们是一个由计算机科学家和律师组成的[跨学科团队](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002F#contributors)，成员来自学术界和业界，致力于理解现代语言模型能够解决的各类法律任务。为此，我们一直在积累并构建多样化的法律 NLP 任务集合——所有这些任务均可在本仓库的 
[这里](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Ftasks\u002F) 找到。我们为该项目设定了两个目标：\n\n1. 首先，我们希望利用这些数据集持续评估大型语言模型在法律推理和法律文本相关任务上的表现。尤其令人兴奋的是，法律文本所特有的挑战或许能激发新的算法创新。\n2. 其次，我们希望借助这些数据集，指导法律从业者和学者更好地理解这些模型在其日常工作流程中的安全性和可靠性影响。\n\n我们构建 LegalBench 的方法受到当代开放科学运动的启发，这些运动致力于推动机器学习开发的民主化参与（例如 [HELM](https:\u002F\u002Fcrfm.stanford.edu\u002Fhelm\u002Flatest\u002F) 和 [BigBench](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench)）。\n\n\n## 贡献一个任务\n\n详情请参见 [这里](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fcontribute\u002F)。\n\n\n## 在 LegalBench 任务上进行评估\n\n详情请参见 [这里](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Fgetting-started\u002F)。\n\n\n## 许可证\n\nLegalBench 包含原创和转换后的数据集。我们要求您遵守数据集原作者的许可证规定。请参阅 [任务页面](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Ftree\u002Fmain\u002Ftasks) 以获取任务及其许可证的列表。\n\n有关如何根据许可证信息选择任务的示例，请参阅 [笔记本](https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fblob\u002Fmain\u002FUsingLegalBench.ipynb)。\n\n\n## 最近与 LegalBench 相关的工作\n\n我们希望重点介绍社区基于 LegalBench 开展的工作。如果您曾使用 LegalBench 并希望在此处添加指向您工作的链接，请随时与我们联系！\n\n项目\u002F评估框架：\n- [vals.ai](https:\u002F\u002Fwww.vals.ai\u002F)\n- 斯坦福基础模型研究中心的 [HELM Lite 基准](https:\u002F\u002Fcrfm.stanford.edu\u002Fhelm\u002Flite\u002Flatest\u002F)\n- [Reexpress AI：不确定性感知的法律推理](https:\u002F\u002Fgithub.com\u002FReexpressAI\u002FExample_Data\u002Fblob\u002Fmain\u002Ftutorials\u002Ftutorial8_legalbench\u002FREADME.md)\n\n研究：\n- Nihal V. Nayak、Yiyang Nan、Avi Trost 和 Stephen H. Bach。[学习生成指令微调数据集以实现零样本任务适配](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.18334)（2024年）\n- Sergio Servantez、Joe Barrow、Kristian Hammond 和 Rajiv Jain。[逻辑链：基于规则的大语言模型推理](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.10400)（2024年）。\n\n## 引用本工作\n\n请包含以下所有引用，以感谢 LegalBench 所依赖的所有来源。\n\n```text\n@misc{guha2023legalbench,\n      title={LegalBench：用于衡量大型语言模型法律推理能力的协作构建基准}, \n      author={Neel Guha 和 Julian Nyarko、Daniel E. Ho、Christopher Ré、Adam Chilton、Aditya Narayana、Alex Chohlas-Wood、Austin Peters、Brandon Waldon、Daniel N. Rockmore、Diego Zambrano、Dmitry Talisman、Enam Hoque、Faiz Surani、Frank Fagan、Galit Sarfaty、Gregory M. Dickinson、Haggai Porat、Jason Hegland、Jessica Wu、Joe Nudell、Joel Niklaus、John Nay、Jonathan H. 
Choi、Kevin Tobia、Margaret Hagan、Megan Ma、Michael Livermore、Nikon Rasumov-Rahe、Nils Holzenberger、Noam Kolt、Peter Henderson、Sean Rehaag、Sharad Goel、Shang Gao、Spencer Williams、Sunny Gandhi、Tom Zur、Varun Iyer、Zehua Li},\n      year={2023},\n      eprint={2308.11462},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n@article{koreeda2021contractnli,\n  title={ContractNLI：面向合同的文档级自然语言推理数据集},\n  author={Koreeda, Yuta 和 Manning, Christopher D},\n  journal={arXiv 预印本 arXiv:2110.01799},\n  year={2021}\n}\n@article{hendrycks2021cuad,\n  title={Cuad：用于法律合同审查的专家标注 NLP 数据集},\n  author={Hendrycks, Dan 和 Burns, Collin、Chen, Anya、Ball, Spencer},\n  journal={arXiv 预印本 arXiv:2103.06268},\n  year={2021}\n}\n@article{wang2023maud,\n  title={MAUD：用于并购协议理解的专家标注法律 NLP 数据集},\n  author={Wang, Steven H 和 Scardigli, Antoine、Tang, Leonard、Chen, Wei、Levkin, Dimitry、Chen, Anya、Ball, Spencer、Woodside, Thomas、Zhang, Oliver、Hendrycks, Dan},\n  journal={arXiv 预印本 arXiv:2301.00876},\n  year={2023}\n}\n@inproceedings{wilson2016creation,\n  title={网站隐私政策语料库的创建与分析},\n  author={Wilson, Shomir 和 Schaub, Florian、Dara, Aswarth Abhilash、Liu, Frederick、Cherivirala, Sushain、Leon, Pedro Giovanni、Andersen, Mads Schaarup、Zimmeck, Sebastian、Sathyendra, Kanthashree Mysore、Russell, N Cameron 等},\n  booktitle={计算语言学协会第54届年会论文集（第1卷：长文）},\n  pages={1330--1340},\n  year={2016}\n}\n@inproceedings{zheng2021does,\n  title={预训练何时有用？评估法律领域的自监督学习及包含53,000余条法律判例的 CaseHold 数据集},\n  author={Zheng, Lucia 和 Guha, Neel、Anderson, Brandon R、Henderson, Peter、Ho, Daniel E},\n  booktitle={第18届国际人工智能与法律会议论文集},\n  pages={159--168},\n  year={2021}\n}\n@article{zimmeck2019maps,\n  title={MAPS：将隐私合规性分析扩展至百万款应用},\n  author={Zimmeck, Sebastian 和 Story, Peter、Smullen, Daniel、Ravichander, Abhilasha、Wang, Ziqi、Reidenberg, Joel R、Russell, N Cameron、Sadeh, Norman},\n  journal={隐私增强技术会议论文集},\n  volume={2019},\n  pages={66},\n  year={2019}\n}\n@article{ravichander2019question,\n  title={隐私政策问答：结合计算与法律视角},\n  author={Ravichander, Abhilasha 和 Black, Alan W、Wilson, Shomir、Norton, Thomas、Sadeh, Norman},\n  journal={arXiv 预印本 arXiv:1911.00841},\n  year={2019}\n}\n@article{holzenberger2021factoring,\n  title={将成文法推理分解为语言理解挑战},\n  author={Holzenberger, Nils 和 Van Durme, Benjamin},\n  journal={arXiv 预印本 arXiv:2105.07903},\n  year={2021}\n}\n@article{lippi2019claudette,\n  title={CLAUDETTE：在线服务条款中潜在不公平条款的自动化检测器},\n  author={Lippi, Marco 和 Pa{\\l}ka, Przemys{\\l}aw、Contissa, Giuseppe、Lagioia, Francesca、Micklitz, Hans-Wolfgang、Sartor, Giovanni、Torroni, Paolo},\n  journal={人工智能与法律},\n  volume={27},\n  pages={117--139},\n  year={2019},\n  publisher={Springer}\n}\n```\n\n## 联系方式\n\n如有任何问题、疑虑或意见，请联系 Neel（nguha@stanford.edu）。","# LegalBench 快速上手指南\n\nLegalBench 是一个协作构建的开源基准测试项目，旨在评估大型语言模型（LLM）在法律推理任务中的表现。该基准包含 162 个由法律社区众包贡献的任务，涵盖传闻证据判断、定义提取、规则问答等多种法律场景。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (推荐 WSL2)\n*   **Python 版本**：Python 3.8 或更高版本\n*   **依赖库**：\n    *   `datasets` (Hugging Face)\n    *   `transformers` (可选，用于加载模型进行评测)\n    *   `pandas`, `numpy` (用于数据处理)\n\n建议创建一个独立的虚拟环境以避免依赖冲突：\n\n```bash\npython -m venv legalbench-env\nsource legalbench-env\u002Fbin\u002Factivate  # Windows 用户使用: legalbench-env\\Scripts\\activate\n```\n\n安装核心依赖：\n\n```bash\npip install datasets transformers pandas numpy\n```\n\n> **国内加速提示**：如果下载 Hugging Face 数据集受阻，建议配置国内镜像源或使用代理。\n> 设置环境变量使用国内镜像：\n> ```bash\n> export HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com\n> ```\n\n## 安装步骤\n\nLegalBench 主要通过 Hugging Face Datasets 库直接加载，无需克隆整个 GitHub 
仓库即可使用数据。如果您需要查看具体的任务描述文件或贡献代码，可以克隆仓库。\n\n### 方式一：直接通过 Hugging Face 加载（推荐）\n\n无需额外安装 `legalbench` 包，直接使用 `datasets` 库即可拉取数据。\n\n```bash\npip install datasets\n```\n\n### 方式二：克隆源代码仓库（可选）\n\n如果您需要浏览本地任务文档或运行特定的评估脚本：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench.git\ncd legalbench\n```\n\n## 基本使用\n\nLegalBench 的核心用法是通过 Hugging Face `datasets` 库加载特定的法律任务数据集。每个任务都是一个独立的子集。\n\n### 1. 加载特定任务数据集\n\n以下示例展示如何加载 \"hearsay\"（传闻证据判断）任务。该任务的输入是证据描述，输出是判断是否为传闻证据（Yes\u002FNo）。\n\n```python\nfrom datasets import load_dataset\n\n# 加载 hearsay 任务数据集\n# 注意：'nguha\u002Flegalbench' 是数据集名称，'hearsay' 是具体的任务配置名\n# 该数据集自带加载脚本，较新版本的 datasets 需要显式传入 trust_remote_code=True\ndataset = load_dataset(\"nguha\u002Flegalbench\", \"hearsay\", trust_remote_code=True)\n\n# 查看训练集的一条样本\nprint(dataset[\"train\"][0])\n```\n\n**输出示例：**\n```text\n{'input': 'The witness testified that he heard the defendant say, \"I did it.\"', 'output': 'No', 'task_name': 'hearsay'}\n```\n\n### 2. 遍历其他任务\n\nLegalBench 包含 162 个任务。您可以更改第二个参数来加载不同的任务，例如 \"definition_extraction\"（定义提取）或 \"rule_qa\"（规则问答）。\n\n```python\n# 加载定义提取任务（同样需要 trust_remote_code=True）\ndef_ext_dataset = load_dataset(\"nguha\u002Flegalbench\", \"definition_extraction\", trust_remote_code=True)\n\n# 查看第一条数据\nsample = def_ext_dataset[\"train\"][0]\nprint(f\"输入句子：{sample['input']}\")\nprint(f\"提取术语：{sample['output']}\")\n```\n\n### 3. 构建简单的评估循环\n\n您可以将输入提供给 LLM，并比较其生成结果与标准输出（`output` 字段）。\n\n```python\n# 伪代码示例：模拟评估过程\ndef evaluate_sample(model, input_text):\n    # 调用您的模型生成答案\n    prediction = model.generate(input_text)\n    return prediction\n\n# 假设已加载数据集\ntest_data = dataset[\"test\"]\n\ncorrect_count = 0\ntotal_count = len(test_data)\n\nfor item in test_data:\n    input_text = item[\"input\"]\n    ground_truth = item[\"output\"]\n\n    # 获取模型预测 (此处需替换为实际模型调用逻辑)\n    # prediction = evaluate_sample(my_llm, input_text)\n    prediction = \"No\"  # 占位符\n\n    if prediction.strip().lower() == ground_truth.strip().lower():\n        correct_count += 1\n\naccuracy = correct_count \u002F total_count\nprint(f\"Accuracy on hearsay task: {accuracy:.2f}\")\n```\n\n### 4. 查看可用任务列表\n\n若不确定任务名称，可访问 [LegalBench 任务列表页面](https:\u002F\u002Fhazyresearch.stanford.edu\u002Flegalbench\u002Ftasks\u002F) 查询所有 162 个任务的详细名称和描述，或在代码中尝试加载时查看报错信息中列出的可用配置。\n\n> **许可说明**：LegalBench 是由多个不同许可证的数据集混合而成。在使用特定任务前，请务必查阅该任务对应的许可证信息（可在 GitHub 仓库的 `tasks` 目录或 Hugging Face 数据集卡片中找到），并遵守原作者的许可协议。","某法律科技团队正在研发一款面向初级律师的 AI 辅助工具，旨在自动分析案件证据并识别其中的“传闻证据”（hearsay），以减轻人工审阅负担。\n\n### 没有 legalbench 时\n- 团队缺乏统一的评估标准，只能随意选取几个网络案例测试模型，无法判断其在真实法律场景下的推理能力是否达标。\n- 难以发现模型在特定法律任务（如定义提取或规则问答）上的隐性缺陷，导致上线后频繁出现看似合理实则错误的“幻觉”回答。\n- 不同版本的模型迭代缺乏量化对比依据，开发人员仅凭主观感觉调整参数，效率低下且方向模糊。\n- 由于缺少来自律师、教授等多方贡献的专业任务集，模型训练数据覆盖面窄，难以处理复杂多变的法律文书结构。\n\n### 使用 legalbench 后\n- 团队直接调用 legalbench 中成熟的“传闻证据识别”等 162 个任务数据集，对模型进行标准化测试，迅速定位其在法律推理上的具体短板。\n- 通过涵盖不同难度和法律领域的任务组合，全面暴露模型在专业术语理解和逻辑推导中的风险点，显著降低生产环境的出错率。\n- 利用统一的基准分数清晰对比各版本模型表现，让算法优化过程有据可依，大幅缩短研发迭代周期。\n- 受益于法律社区众包构建的高质量任务库，模型接触到更多贴近实务的“有趣”与“有用”场景，提升了处理真实案件的鲁棒性。\n\nlegalbench 将原本模糊的法律 AI 评估转化为可量化的科学基准，帮助开发者在安全可控的前提下释放法律大模型的真正潜力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FHazyResearch_legalbench_308cb169.png","HazyResearch","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FHazyResearch_5f558f19.png","We are a CS research group led by Prof. 
Chris Ré.",null,"contact.hazy@gmail.com","https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Fchrismre\u002F","https:\u002F\u002Fgithub.com\u002FHazyResearch",[22,26],{"name":23,"color":24,"percentage":25},"Python","#3572A5",61,{"name":27,"color":28,"percentage":29},"Jupyter Notebook","#DA5B0B",39,567,85,"2026-04-04T02:13:35",1,"",{"notes":36,"python":34,"dependencies":37},"README 中未提供具体的运行环境需求（如操作系统、GPU、内存、Python 版本或依赖库）。该项目主要是一个包含 162 个法律推理任务的数据集集合，用于评估大语言模型。具体的评估代码、环境配置及依赖项需参考项目中提到的外部链接（如 'Getting Started' 页面或示例 Notebook）。数据托管在 Hugging Face 上，使用时需遵循各子数据集的特定许可证要求。",[],[39,40],"语言模型","其他",2,"ready","2026-03-27T02:49:30.150509","2026-04-06T05:44:28.017056",[46,51,56,61,66,71,76,81],{"id":47,"question_zh":48,"answer_zh":49,"source_url":50},15384,"加载数据集时出现 'size mismatch'（大小不匹配）错误怎么办？","这通常是由于任务更新时的临时 bug 导致的。维护者已修复该问题，请尝试重新运行加载代码。如果问题依旧，请确保使用的是最新版本的 datasets 库。示例代码：\ndataset = load_dataset(\"nguha\u002Flegalbench\", \"unfair_tos\", split=datasets.Split.TEST, trust_remote_code=True)","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F37",{"id":52,"question_zh":53,"answer_zh":54,"source_url":55},15385,"为什么 rule_qa 或 scalr 任务的训练数据（train.tsv）是空的？","这是设计使然。`rule_qa` 和 `scalr` 是零样本（zero-shot）任务，因此不提供训练样本。`rule_qa` 的测试集答案可以在 Huggingface 仓库中找到：https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fnguha\u002Flegalbench\u002Fblob\u002Fmain\u002Fdata\u002Frule_qa\u002Ftest.tsv。如果需要评估用的思维链生成结果，请访问项目官网查看。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F38",{"id":57,"question_zh":58,"answer_zh":59,"source_url":60},15386,"如何获取 LegalBench 的完整数据集？","完整的 LegalBench 数据集已发布在 Huggingface 上，可以通过以下地址访问和下载所有任务数据：https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fnguha\u002Flegalbench。GitHub 仓库中的部分文件可能仅作为示例或占位符。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F39",{"id":62,"question_zh":63,"answer_zh":64,"source_url":65},15387,"在哪里可以查看 LegalBench 的公共排行榜或模型性能数据？","公共排行榜可以在 https:\u002F\u002Fwww.legalevalhub.ai\u002F 查看。此外，论文附录中也提供了详细的模型性能数据，论文地址为：https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.11462。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F32",{"id":67,"question_zh":68,"answer_zh":69,"source_url":70},15388,"如何获取基准测试中使用的具体提示词（prompts）和模型输出？","每个任务的提示词模板都存储在该任务的文件夹中（例如 `tasks\u002Fabercrombie\u002Fbase_prompt.txt`），文件中包含用于填充特定列值的占位符。关于如何使用这些提示词的详细示例，请参考官方 Notebook：https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fblob\u002Fmain\u002FUsingLegalBench.ipynb。评估时，每个样本均使用相同的提示词模板。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F31",{"id":72,"question_zh":73,"answer_zh":74,"source_url":75},15389,"rule_qa 任务数据中的 'slice' 列是什么意思？","`slice` 列主要用于对结果进行进一步分析。例如，可以用来评估模型在回答“民事程序”（Civil Procedure）类问题时是否比回答“知识产权”（IP）类问题表现更好。相关描述已更新至 README 文件中。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F12",{"id":77,"question_zh":78,"answer_zh":79,"source_url":80},15390,"部分 CUAD 任务缺少 base_prompts、README 或测试数据文件怎么办？","这之前是由于合并代码时的失误导致的文件缺失问题，维护者已经修复。请拉取最新的代码仓库即可找到缺失的 `base_prompts`、README 
文件和测试数据。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F30",{"id":82,"question_zh":83,"answer_zh":84,"source_url":85},15391,"该项目目前是否还在活跃维护中？","是的，项目仍然活跃。维护团队已完成第一轮任务收集，并正在准备发布及进行大语言模型基准测试。项目持续欢迎新的任务提交，但在初始发布后提交的任务可能需要等待后续版本才能被纳入。","https:\u002F\u002Fgithub.com\u002FHazyResearch\u002Flegalbench\u002Fissues\u002F25",[],[88,98,106,119,128,136],{"id":89,"name":90,"github_repo":91,"description_zh":92,"stars":93,"difficulty_score":41,"last_commit_at":94,"category_tags":95,"status":42},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,"2026-04-05T11:33:21",[96,97,39],"开发框架","Agent",{"id":99,"name":100,"github_repo":101,"description_zh":102,"stars":103,"difficulty_score":41,"last_commit_at":104,"category_tags":105,"status":42},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[96,39],{"id":107,"name":108,"github_repo":109,"description_zh":110,"stars":111,"difficulty_score":41,"last_commit_at":112,"category_tags":113,"status":42},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[114,115,116,117,97,40,39,96,118],"图像","数据工具","视频","插件","音频",{"id":120,"name":121,"github_repo":122,"description_zh":123,"stars":124,"difficulty_score":125,"last_commit_at":126,"category_tags":127,"status":42},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[97,114,96,39,40],{"id":129,"name":130,"github_repo":131,"description_zh":132,"stars":133,"difficulty_score":125,"last_commit_at":134,"category_tags":135,"status":42},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 
是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[39,114,96,40],{"id":137,"name":138,"github_repo":139,"description_zh":140,"stars":141,"difficulty_score":33,"last_commit_at":142,"category_tags":143,"status":42},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[96,40]]