[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Tiiiger--bert_score":3,"tool-Tiiiger--bert_score":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":76,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":32,"env_os":97,"env_gpu":98,"env_ram":97,"env_deps":99,"category_tags":105,"github_topics":106,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":142},8013,"Tiiiger\u002Fbert_score","bert_score","BERT score for text generation","BERTScore 是一款基于预训练语言模型 BERT 的自动化文本评估工具，专为衡量文本生成质量而设计。在传统评估方法（如 BLEU 或 ROUGE）往往仅依赖词汇重叠度、难以捕捉语义细微差别的背景下，BERTScore 通过计算生成文本与参考文本在深层语义空间中的相似度，提供了更贴近人类判断的评分标准。\n\n这款工具特别适合自然语言处理领域的研究人员、算法工程师以及需要评估机器翻译、摘要生成或对话系统效果的开发者使用。其核心亮点在于利用上下文相关的词嵌入技术，不仅支持超过 130 种预训练模型（包括 DeBERTa、RoBERTa 等），还能根据具体任务灵活选择最佳模型以获得更高的人类评分相关性。例如，官方推荐使用 `microsoft\u002Fdeberta-xlarge-mnli` 模型来进一步提升评估准确度。此外，BERTScore 持续更新，兼容主流深度学习框架，并针对多语言场景提供了广泛支持，让跨语言文本评估变得更加便捷可靠。无论是学术研究还是工业界应用，BERTScore 都能帮助用户高效、客观地量化文本生成系统的性能表现。","# BERTScore\n[![made-with-python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Python-red.svg)](#python)\n[![arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-1904.09675-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09675)\n[![PyPI version bert-score](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-score.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fbert-score\u002F) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTiiiger_bert_score_readme_10efdeec928e.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fbert-score) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTiiiger_bert_score_readme_10efdeec928e.png\u002Fmonth)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fbert-score) [![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) \n[![Code style: black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack) \n\n\nAutomatic Evaluation Metric described in the paper [BERTScore: Evaluating Text Generation with BERT](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09675) (ICLR 2020). 
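For orientation, a minimal Python call might look like the sketch below (it mirrors the quickstart example later on this page and the Usage section below; it assumes `bert-score` is already installed, and `model_type` is the documented way to override the per-language default model):

```python
# Minimal sketch of the bert_score Python API (see the Usage section below).
from bert_score import score

cands = ["On the table are two apples."]          # candidate (generated) sentences
refs = ["There are two bananas on the table."]    # reference sentences, aligned by index

# lang="en" selects the default English model (roberta-large).
# For better correlation with human judgment, the README recommends
# model_type="microsoft/deberta-xlarge-mnli" instead.
P, R, F1 = score(cands, refs, lang="en", verbose=True)
print(f"P={P.mean():.4f} R={R.mean():.4f} F1={F1.mean():.4f}")
```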
We now support about 130 models (see this [spreadsheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing) for their correlations with human evaluation). Currently, the best model is `microsoft\u002Fdeberta-xlarge-mnli`, please consider using it instead of the default `roberta-large` in order to have the best correlation with human evaluation.\n\n#### News:\n\u003C!-- - Features to appear in the next version (currently in the master branch): -->\n- Updated to version 0.3.13\n  - Fix bug with transformers version > 4.17.0 ([#148](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F148))\n- Updated to version 0.3.12\n  - Having `get_idf_dict` compatible with DDP ([#140](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F140))\n  - Fix setup bug ([#138](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F138))\n- Updated to version 0.3.11\n  - Support 6 DeBERTa v3 models\n  - Support 3 ByT5 models\n- Updated to version 0.3.10\n  - Support 8 SimCSE models\n  - Fix the support of scibert (to be compatible with transformers >= 4.0.0)\n  - Add scripts for reproducing some results in our paper (See this [folder](.\u002Freproduce))\n  - Support fast tokenizers in huggingface transformers with `--use_fast_tokenizer`. Notably, you will get different scores because of the difference in the tokenizer implementations ([#106](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F106)). \n  - Fix non-zero recall problem for empty candidate strings ([#107](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F107)).\n  - Add Turkish BERT Supoort ([#108](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F108)).\n- Updated to version 0.3.9\n  - Support 3 BigBird models\n  - Fix bugs for mBART and T5\n  - Support 4 mT5 models as requested ([#93](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F93))\n- Updated to version 0.3.8\n  - Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, Blendbot, etc. Among them, DeBERTa achives higher correlation with human scores than RoBERTa (our default) on WMT16 dataset. The correlations are presented in this [Google sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing).\n  - Please consider using `--model_type microsoft\u002Fdeberta-xlarge-mnli` or `--model_type microsoft\u002Fdeberta-large-mnli` (faster) if you want the scores to correlate better with human scores.\n  - Add baseline files for DeBERTa models.\n  - Add example code to generate baseline files (please see the [details](get_rescale_baseline)).\n- Updated to version 0.3.7\n  - Being compatible with Huggingface's transformers version >=4.0.0. Thanks to public contributers ([#84](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F84), [#85](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F85), [#86](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F86)).\n- See [#22](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F22) if you want to replicate our experiments on the COCO Captioning dataset.\n\n\n- For people in China, downloading pre-trained weights can be very slow. 
We provide copies of a few models on Baidu Pan.\n  - [roberta-large](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1MTmGHsZ3ubn7Vr_W-wyEdQ) password: dhe5\n  - [bert-base-chinese](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1THfiCXjWtdGGsCMskQ5svA) password: jvk7\n  - [bert-base-multilingual-cased](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F100SBjkLmI7U4pgo_e0q7CQ) password: yx3q\n- [Huggingface's datasets](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdatasets) library includes BERTScore in their metric collection.\n\n\u003Cdetails>\u003Csummary>Previous updates\u003C\u002Fsummary>\u003Cp>\n\n- Updated to version 0.3.6\n  - Support custom baseline files [#74](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F74)\n  - The option `--rescale-with-baseline` is changed to `--rescale_with_baseline` so that it is consistent with other options.\n- Updated to version 0.3.5\n  - Being compatible with Huggingface's transformers >=v3.0.0 and minor fixes ([#58](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F58), [#66](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F66), [#68](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F68))\n  - Several improvements related to efficency ([#67](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F67), [#69](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F69))\n- Updated to version 0.3.4\n  - Compatible with transformers v2.11.0 now (#58)\n- Updated to version 0.3.3\n  - Fixing the bug with empty strings [issue #47](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F47).\n  - Supporting 6 [ELECTRA](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Felectra) models and 24 smaller [BERT](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert) models.\n  - A new [Google sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing) for keeping the performance (i.e., pearson correlation with human judgment) of different models on WMT16 to-English.\n  - Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the [details](tune_layers)).\n- Updated to version 0.3.2\n  - **Bug fixed**: fixing the bug in v0.3.1 when having multiple reference sentences.\n  - Supporting multiple reference sentences with our command line tool.\n- Updated to version 0.3.1\n  - A new `BERTScorer` object that caches the model to avoid re-loading it multiple times. Please see our [jupyter notebook example](.\u002Fexample\u002FDemo.ipynb) for the usage.\n  - Supporting multiple reference sentences for each example. The `score` function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.\n\n\u003C\u002Fp>\u003C\u002Fdetails>\n\nPlease see [release logs](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Freleases) for older updates.\n\n#### Authors:\n* [Tianyi Zhang](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=OI0HSa0AAAAJ&hl=en)*\n* [Varsha Kishore](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=B8UeYcEAAAAJ&authuser=2)*\n* [Felix Wu](https:\u002F\u002Fsites.google.com\u002Fview\u002Ffelixwu\u002Fhome)*\n* [Kilian Q. 
Weinberger](http:\u002F\u002Fkilian.cs.cornell.edu\u002Findex.html)\n* [Yoav Artzi](https:\u002F\u002Fyoavartzi.com\u002F)\n\n*: Equal Contribution\n\n### Overview\nBERTScore leverages the pre-trained contextual embeddings from BERT and matches\nwords in candidate and reference sentences by cosine similarity.\nIt has been shown to correlate with human judgment on sentence-level and\nsystem-level evaluation.\nMoreover, BERTScore computes precision, recall, and F1 measure, which can be\nuseful for evaluating different language generation tasks.\n\nFor an illustration, BERTScore recall can be computed as\n![](.\u002Fbert_score.png \"BERTScore\")\n\nIf you find this repo useful, please cite:\n```\n@inproceedings{bert-score,\n  title={BERTScore: Evaluating Text Generation with BERT},\n  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},\n  booktitle={International Conference on Learning Representations},\n  year={2020},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=SkeHuCVFDr}\n}\n```\n\n### Installation\n* Python version >= 3.6\n* PyTorch version >= 1.0.0\n\nInstall from pypi with pip by \n\n```sh\npip install bert-score\n```\nInstall latest unstable version from the master branch on Github by:\n```\npip install git+https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\n```\n\nInstall it from the source by:\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\ncd bert_score\npip install .\n```\nand you may test your installation by:\n```\npython -m unittest discover\n```\n\n### Usage\n\n\n#### Python Function\n\nOn a high level, we provide a python function `bert_score.score` and a python object `bert_score.BERTScorer`.\nThe function provides all the supported features while the scorer object caches the BERT model to faciliate multiple evaluations.\nCheck our [demo](.\u002Fexample\u002FDemo.ipynb) to see how to use these two interfaces. \nPlease refer to [`bert_score\u002Fscore.py`](.\u002Fbert_score\u002Fscore.py) for implementation details.\n\nRunning BERTScore can be computationally intensive (because it uses BERT :p).\nTherefore, a GPU is usually necessary. If you don't have access to a GPU, you\ncan try our [demo on Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1kpL8Y_AnUUiCxFjhxSrxCsc6-sDMNb_Q)\n\n#### Command Line Interface (CLI)\nWe provide a command line interface (CLI) of BERTScore as well as a python module. \nFor the CLI, you can use it as follows:\n1. To evaluate English text files:\n\nWe provide example inputs under `.\u002Fexample`.\n\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --lang en\n```\nYou will get the following output at the end:\n\nroberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333\n\nwhere \"roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)\" is the hash code.\n\nStarting from version 0.3.0, we support rescaling the scores with baseline scores\n\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --lang en --rescale_with_baseline\n```\nYou will get:\n\nroberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled P: 0.747044 R: 0.770484 F1: 0.759045 \n\nThis makes the range of the scores larger and more human-readable. 
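The same rescaling is available from the Python API; a small sketch, assuming the `rescale_with_baseline` keyword (the Python counterpart of the CLI flag, as referenced in the FAQ below):

```python
from bert_score import score

cands = ["On the table are two apples."]
refs = ["There are two bananas on the table."]

# Raw BERTScore values tend to cluster in a narrow high range;
# rescale_with_baseline=True applies the pre-computed baseline transformation
# so the scores spread over a wider, more readable scale.
P, R, F1 = score(cands, refs, lang="en", rescale_with_baseline=True)
print(f"rescaled F1: {F1.mean():.4f}")
```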
Please see this [post](.\u002Fjournal\u002Frescale_baseline.md) for details.\n\nWhen having multiple reference sentences, please use\n```sh\nbert-score -r example\u002Frefs.txt example\u002Frefs2.txt -c example\u002Fhyps.txt --lang en\n```\nwhere the `-r` argument supports an arbitrary number of reference files. Each reference file should have the same number of lines as your candidate\u002Fhypothesis file. The i-th line in each reference file corresponds to the i-th line in the candidate file.\n\n\n2. To evaluate text files in other languages:\n\nWe currently support the 104 languages in multilingual BERT ([full list](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\u002Fblob\u002Fmaster\u002Fmultilingual.md#list-of-languages)).\n\nPlease specify the two-letter abbreviation of the language. For instance, using `--lang zh` for Chinese text. \n\nSee more options by `bert-score -h`.\n\n\n3. To load your own custom model:\nPlease specify the path to the model and the number of layers to use by `--model` and `--num_layers`.\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --model path_to_my_bert --num_layers 9\n```\n\n\n4. To visualize matching scores:\n```sh\nbert-score-show --lang en -r \"There are two bananas on the table.\" -c \"On the table are two apples.\" -f out.png\n```\nThe figure will be saved to out.png.\n\n5. If you see the following message while using BERTScore, please ignore it. This is expected.\n```\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.weight']\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n```\n\n#### Practical Tips\n\n* Report the hash code (e.g., `roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled`) in your paper so that people know what setting you use. This is inspired by [sacreBLEU](https:\u002F\u002Fgithub.com\u002Fmjpost\u002FsacreBLEU). Changes in huggingface's transformers version may also affect the score (See [issue #46](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F46)).\n* Unlike BERT, RoBERTa uses GPT2-style tokenizer which creates addition \" \" tokens when there are multiple spaces appearing together. It is recommended to remove addition spaces by `sent = re.sub(r' +', ' ', sent)` or `sent = re.sub(r'\\s+', ' ', sent)`.\n* Using inverse document frequency (idf) on the reference\n  sentences to weigh word importance  may correlate better with human judgment.\n  However, when the set of reference sentences become too small, the idf score \n  would become inaccurate\u002Finvalid.\n  We now make it optional. To use idf,\n  please set `--idf` when using the CLI tool or\n  `idf=True` when calling `bert_score.score` function.\n* When you are low on GPU memory, consider setting `batch_size` when calling\n  `bert_score.score` function.\n* To use a particular model please set `-m MODEL_TYPE` when using the CLI tool\n  or `model_type=MODEL_TYPE` when calling `bert_score.score` function. 
\n* We tune layer to use based on WMT16 metric evaluation dataset. You may use a\n  different layer by setting `-l LAYER` or `num_layers=LAYER`. To tune the best layer for your custom model, please follow the instructions in [tune_layers](tune_layers) folder.\n* __Limitation__: Because BERT, RoBERTa, and XLM with learned positional embeddings are pre-trained on sentences with max length 512, BERTScore is undefined between sentences longer than 510 (512 after adding \\[CLS\\] and \\[SEP\\] tokens). The sentences longer than this will be truncated. Please consider using XLNet which can support much longer inputs.\n\n### Default Behavior\n\n#### Default Model\n| Language  | Model                            |\n|:---------:|:--------------------------------:|\n| en        | roberta-large                    |\n| en-sci    | allenai\u002Fscibert_scivocab_uncased |\n| zh        | bert-base-chinese                |\n| tr        | dbmdz\u002Fbert-base-turkish-cased    |\n| others    | bert-base-multilingual-cased     |\n\n#### Default Layers\nPlease see this [Google sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing) for the supported models and their performance.\n\n### Acknowledgement\nThis repo wouldn't be possible without the awesome\n[bert](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert), [fairseq](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq), and [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers).\n","# BERTScore\n[![made-with-python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMade%20with-Python-red.svg)](#python)\n[![arxiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-1904.09675-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.09675)\n[![PyPI version bert-score](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fbert-score.svg)](https:\u002F\u002Fpypi.python.org\u002Fpypi\u002Fbert-score\u002F) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTiiiger_bert_score_readme_10efdeec928e.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fbert-score) [![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTiiiger_bert_score_readme_10efdeec928e.png\u002Fmonth)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fbert-score) [![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) \n[![Code style: black](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fcode%20style-black-000000.svg)](https:\u002F\u002Fgithub.com\u002Fpsf\u002Fblack) \n\n\n自动评估指标，如论文《BERTScore：用BERT评估文本生成》（ICLR 2020）中所述。目前我们支持约130种模型（请参阅此[电子表格](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing)，其中列出了它们与人工评价的相关性）。当前表现最佳的模型是`microsoft\u002Fdeberta-xlarge-mnli`，建议使用该模型代替默认的`roberta-large`，以获得与人工评价的最佳相关性。\n\n#### 最新消息：\n\u003C!-- - 即将在下一版本中推出的功能（目前位于master分支）： -->\n- 更新至0.3.13版本\n  - 修复了transformers版本>4.17.0时的bug ([#148](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F148))\n- 更新至0.3.12版本\n  - 使`get_idf_dict`与DDP兼容 ([#140](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F140))\n  - 修复了安装问题 ([#138](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F138))\n- 更新至0.3.11版本\n  - 支持6个DeBERTa v3模型\n  - 支持3个ByT5模型\n- 更新至0.3.10版本\n  - 支持8个SimCSE模型\n  - 修复了scibert的支持问题（使其与transformers >= 4.0.0兼容）\n  - 
添加了用于复现我们论文中部分结果的脚本（详见此[文件夹](.\u002Freproduce)）\n  - 支持Hugging Face Transformers中的快速分词器，可通过`--use_fast_tokenizer`选项启用。需要注意的是，由于分词器实现的不同，您可能会得到不同的分数 ([#106](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F106))。\n  - 修复了空候选字符串时召回率不为零的问题 ([#107](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F107))。\n  - 增加了对土耳其语BERT的支持 ([#108](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F108))。\n- 更新至0.3.9版本\n  - 支持3个BigBird模型\n  - 修复了mBART和T5的若干bug\n  - 按照请求支持了4个mT5模型 ([#93](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F93))\n- 更新至0.3.8版本\n  - 支持53个新的预训练模型，包括BART、mBART、BORT、DeBERTa、T5、BERTweet、MPNet、ConvBERT、SqueezeBERT、SpanBERT、PEGASUS、Longformer、LED、Blendbot等。其中，DeBERTa在WMT16数据集上与人工评分的相关性高于我们的默认模型RoBERTa。相关性数据见此[Google表格](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing)。\n  - 如果希望得分与人工评分有更好的相关性，请考虑使用`--model_type microsoft\u002Fdeberta-xlarge-mnli`或`--model_type microsoft\u002Fdeberta-large-mnli`（速度更快）。\n  - 添加了DeBERTa模型的基准文件。\n  - 提供了生成基准文件的示例代码（详情请参阅`get_rescale_baseline`）。\n- 更新至0.3.7版本\n  - 与Hugging Face的transformers库v4.0.0及以上版本兼容。感谢各位社区贡献者([#84](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F84), [#85](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F85), [#86](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F86))。\n- 如需复现我们在COCO字幕数据集上的实验，请参阅[#22](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F22)。\n\n\n- 对于中国用户来说，下载预训练权重可能会非常缓慢。我们已在百度网盘上提供了部分模型的副本：\n  - [roberta-large](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1MTmGHsZ3ubn7Vr_W-wyEdQ) 提取码：dhe5\n  - [bert-base-chinese](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1THfiCXjWtdGGsCMskQ5svA) 提取码：jvk7\n  - [bert-base-multilingual-cased](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F100SBjkLmI7U4pgo_e0q7CQ) 提取码：yx3q\n- Hugging Face的`datasets`库已将BERTScore纳入其度量指标集合中。\n\n\u003Cdetails>\u003Csummary>往期更新\u003C\u002Fsummary>\u003Cp>\n\n- 更新至0.3.6版本\n  - 支持自定义基准文件 [#74](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F74)\n  - `--rescale-with-baseline`选项现已改为`--rescale_with_baseline`，以与其他选项保持一致。\n- 更新至0.3.5版本\n  - 与Hugging Face的transformers库v3.0.0及以上版本兼容，并进行了一些小修复 ([#58](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F58), [#66](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F66), [#68](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F68))。\n  - 进行了多项效率相关的改进 ([#67](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F67), [#69](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F69))。\n- 更新至0.3.4版本\n  - 现在与transformers v2.11.0兼容 (#58)。\n- 更新至0.3.3版本\n  - 修复了空字符串的bug [issue #47](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F47)。\n  - 支持6个[ELECTRA](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Felectra)模型和24个小型[BERT](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert)模型。\n  - 新增了一个[Google表格](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing)，用于记录不同模型在WMT16英译任务中与人工判断的皮尔逊相关性。\n  - 包含了针对WMT16英译数据调优英语预训练模型最佳层数的脚本（详情请参阅`tune_layers`）。\n- 更新至0.3.2版本\n  - **修复了bug**：解决了v0.3.1版本中处理多条参考句时出现的问题。\n  - 现在我们的命令行工具支持多条参考句。\n- 更新至0.3.1版本\n  - 引入了一个新的`BERTScorer`对象，可缓存模型以避免重复加载。使用方法请参阅我们的[Jupyter 
Notebook示例](.\u002Fexample\u002FDemo.ipynb)。\n  - 现在每个样本可以有多个参考句。`score`函数现在可以接受一个字符串列表的列表作为参考，并返回候选句与其最接近的参考句之间的分数。\n\n\u003C\u002Fp>\u003C\u002Fdetails>\n\n更多旧版本的更新信息，请参阅[发布日志](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Freleases)。\n\n#### 作者：\n* [张天一](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=OI0HSa0AAAAJ&hl=en)*\n* [瓦尔莎·基肖尔](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=B8UeYcEAAAAJ&authuser=2)*\n* [费利克斯·吴](https:\u002F\u002Fsites.google.com\u002Fview\u002Ffelixwu\u002Fhome)*\n* [基利安·Q·温伯格](http:\u002F\u002Fkilian.cs.cornell.edu\u002Findex.html)\n* [约阿夫·阿茨伊](https:\u002F\u002Fyoavartzi.com\u002F)\n\n*: 共同贡献\n\n### 概述\nBERTScore 利用 BERT 的预训练上下文嵌入，通过余弦相似度匹配候选句和参考句中的词语。研究表明，它与人类对句子级别和系统级别的评价具有相关性。此外，BERTScore 还可以计算精确率、召回率和 F1 分数，这些指标对于评估不同的语言生成任务非常有用。\n\n例如，BERTScore 的召回率可以这样计算：\n![](.\u002Fbert_score.png \"BERTScore\")\n\n如果您觉得这个仓库很有用，请引用以下文献：\n```\n@inproceedings{bert-score,\n  title={BERTScore: Evaluating Text Generation with BERT},\n  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},\n  booktitle={International Conference on Learning Representations},\n  year={2020},\n  url={https:\u002F\u002Fopenreview.net\u002Fforum?id=SkeHuCVFDr}\n}\n```\n\n### 安装\n* Python 版本 >= 3.6\n* PyTorch 版本 >= 1.0.0\n\n可以通过 pip 从 pypi 安装：\n\n```sh\npip install bert-score\n```\n\n也可以从 GitHub 主分支安装最新的不稳定版本：\n\n```sh\npip install git+https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\n```\n\n或者从源代码安装：\n\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\ncd bert_score\npip install .\n```\n\n安装完成后，您可以运行以下命令来测试安装是否成功：\n\n```sh\npython -m unittest discover\n```\n\n### 使用方法\n\n\n#### Python 函数\n\n总体来说，我们提供了一个 Python 函数 `bert_score.score` 和一个 Python 对象 `bert_score.BERTScorer`。函数提供了所有支持的功能，而评分器对象则会缓存 BERT 模型，以便进行多次评估。请查看我们的 [示例](.\u002Fexample\u002FDemo.ipynb)，了解如何使用这两个接口。实现细节请参阅 [`bert_score\u002Fscore.py`](.\u002Fbert_score\u002Fscore.py)。\n\n运行 BERTScore 可能会非常消耗计算资源（因为它使用了 BERT :p）。因此，通常需要 GPU。如果您没有 GPU，可以尝试我们在 [Google Colab](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1kpL8Y_AnUUiCxFjhxSrxCsc6-sDMNb_Q) 上的演示。\n\n#### 命令行界面 (CLI)\n我们不仅提供了 Python 模块，还提供了一个 BERTScore 的命令行界面。以下是 CLI 的使用方法：\n\n1. 评估英文文本文件：\n\n我们提供了一些示例输入文件在 `.\u002Fexample` 目录下。\n\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --lang en\n```\n\n最终您将得到如下输出：\n\nroberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333\n\n其中“roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)”是哈希码。\n\n从 0.3.0 版本开始，我们支持使用基准分数对得分进行重新缩放：\n\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --lang en --rescale_with_baseline\n```\n\n您将得到：\n\nroberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled P: 0.747044 R: 0.770484 F1: 0.759045 \n\n这使得得分范围更大，更易于理解。详细信息请参阅这篇 [文章](.\u002Fjournal\u002Frescale_baseline.md)。\n\n如果有多个参考句，请使用以下命令：\n\n```sh\nbert-score -r example\u002Frefs.txt example\u002Frefs2.txt -c example\u002Fhyps.txt --lang en\n```\n\n其中 `-r` 参数可以接受任意数量的参考文件。每个参考文件的行数应与候选句文件相同。每个参考文件的第 i 行对应于候选句文件的第 i 行。\n\n2. 评估其他语言的文本文件：\n\n我们目前支持多语言 BERT 中的 104 种语言（[完整列表](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert\u002Fblob\u002Fmaster\u002Fmultilingual.md#list-of-languages)）。\n\n请指定该语言的双字母缩写。例如，对于中文文本，使用 `--lang zh`。\n\n更多选项请使用 `bert-score -h` 查看。\n\n3. 加载自定义模型：\n请通过 `--model` 和 `--num_layers` 指定模型路径和要使用的层数。\n\n```sh\nbert-score -r example\u002Frefs.txt -c example\u002Fhyps.txt --model path_to_my_bert --num_layers 9\n```\n\n4. 
可视化匹配得分：\n```sh\nbert-score-show --lang en -r \"There are two bananas on the table.\" -c \"On the table are two apples.\" -f out.png\n```\n\n生成的图像将保存为 out.png。\n\n5. 如果在使用 BERTScore 时看到以下提示，请忽略。这是正常现象：\n\n```\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.weight']\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n```\n\n#### 实用技巧\n\n* 在论文中报告哈希码（例如：`roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled`），以便他人了解您使用的设置。这一做法受到 [sacreBLEU](https:\u002F\u002Fgithub.com\u002Fmjpost\u002FsacreBLEU) 的启发。Hugging Face 的 transformers 版本变化也可能影响得分（参见 [issue #46](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F46)）。\n* 与 BERT 不同，RoBERTa 使用 GPT2 风格的分词器，在连续出现多个空格时会生成额外的空格标记。建议使用 `sent = re.sub(r' +', ' ', sent)` 或 `sent = re.sub(r'\\s+', ' ', sent)` 来去除多余的空格。\n* 在参考句上使用逆文档频率（idf）来权衡单词的重要性，可能会与人类判断更加一致。然而，当参考句集过小时，idf 得分可能会变得不准确或无效。我们现在将其设为可选。如果要使用 idf，请在使用 CLI 工具时设置 `--idf`，或在调用 `bert_score.score` 函数时设置 `idf=True`。\n* 当 GPU 内存不足时，可以在调用 `bert_score.score` 函数时设置 `batch_size`。\n* 如果要使用特定模型，请在使用 CLI 工具时设置 `-m MODEL_TYPE`，或在调用 `bert_score.score` 函数时设置 `model_type=MODEL_TYPE`。\n* 我们根据 WMT16 评测数据集调整了使用的层数。您也可以通过设置 `-l LAYER` 或 `num_layers=LAYER` 来选择不同的层。要为您的自定义模型找到最佳层数，请按照 [tune_layers](tune_layers) 文件夹中的说明操作。\n* __局限性__：由于 BERT、RoBERTa 和 XLM 等带有学习型位置嵌入的模型是在最大长度为 512 的句子上预训练的，因此 BERTScore 对于超过 510 个 token 的句子（加上 [CLS] 和 [SEP] 标记后为 512 个 token）是未定义的。超过此长度的句子会被截断。请考虑使用 XLNet，它可以支持更长的输入。\n\n### 默认行为\n\n#### 默认模型\n| 语言  | 模型                            |\n|:---------:|:--------------------------------:|\n| en        | roberta-large                    |\n| en-sci    | allenai\u002Fscibert_scivocab_uncased |\n| zh        | bert-base-chinese                |\n| tr        | dbmdz\u002Fbert-base-turkish-cased    |\n| 其他    | bert-base-multilingual-cased     |\n\n#### 默认层\n请参阅此 [Google 表格](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing)，以了解支持的模型及其性能。\n\n### 致谢\n如果没有出色的 [bert](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert)、[fairseq](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Ffairseq) 和 [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)，这个仓库将无法实现。","# BERTScore 快速上手指南\n\nBERTScore 是一种基于预训练语言模型（如 BERT）的自动文本评估指标，通过计算候选句与参考句之间词嵌入的余弦相似度来衡量生成文本的质量。它在句子级和系统级评估中均表现出与人类判断的高度相关性，并支持精确率、召回率和 F1 值的计算。\n\n## 环境准备\n\n- **操作系统**：Linux \u002F macOS \u002F Windows\n- **Python 版本**：>= 3.6\n- **PyTorch 版本**：>= 1.0.0\n- **依赖库**：`transformers`, `numpy`, `torch` 等（安装时会自动解决）\n\n> 💡 提示：运行 BERTScore 计算量较大，建议使用 GPU 加速。若无 GPU，可尝试 [Google Colab 演示](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1kpL8Y_AnUUiCxFjhxSrxCsc6-sDMNb_Q)。\n\n## 安装步骤\n\n### 方式一：通过 PyPI 安装（推荐）\n\n```sh\npip install bert-score\n```\n\n### 方式二：从源码安装（获取最新功能）\n\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\ncd bert_score\npip install .\n```\n\n### 
国内用户加速建议\n\n由于下载预训练模型较慢，中国用户可从百度网盘手动下载部分模型权重：\n\n- [roberta-large](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1MTmGHsZ3ubn7Vr_W-wyEdQ) 密码：`dhe5`\n- [bert-base-chinese](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1THfiCXjWtdGGsCMskQ5svA) 密码：`jvk7`\n- [bert-base-multilingual-cased](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F100SBjkLmI7U4pgo_e0q7CQ) 密码：`yx3q`\n\n下载后可将模型放置于本地路径，并通过 `--model` 参数指定使用。\n\n## 基本使用\n\n### 方法一：命令行界面（CLI）\n\n#### 评估英文文本\n\n准备两个文件：\n- `refs.txt`：每行为一条参考句子\n- `hyps.txt`：每行为一条候选句子\n\n执行命令：\n\n```sh\nbert-score -r refs.txt -c hyps.txt --lang en\n```\n\n输出示例：\n```\nroberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333\n```\n\n#### 启用分数重缩放（更易读）\n\n```sh\nbert-score -r refs.txt -c hyps.txt --lang en --rescale_with_baseline\n```\n\n#### 支持多条参考句\n\n```sh\nbert-score -r refs1.txt refs2.txt -c hyps.txt --lang en\n```\n\n#### 评估中文文本\n\n```sh\nbert-score -r refs_zh.txt -c hyps_zh.txt --lang zh\n```\n\n#### 可视化词语匹配关系\n\n```sh\nbert-score-show --lang en -r \"There are two bananas on the table.\" -c \"On the table are two apples.\" -f match.png\n```\n\n### 方法二：Python API 调用\n\n```python\nfrom bert_score import score\n\ncandidates = [\"On the table are two apples.\"]\nreferences = [\"There are two bananas on the table.\"]\n\nP, R, F1 = score(candidates, references, lang=\"en\", verbose=True)\nprint(f\"P: {P.mean():.4f}, R: {R.mean():.4f}, F1: {F1.mean():.4f}\")\n```\n\n如需多次评估，推荐使用 `BERTScorer` 对象以缓存模型：\n\n```python\nfrom bert_score import BERTScorer\n\nscorer = BERTScorer(lang=\"en\")\nP, R, F1 = scorer.score(candidates, references)\n```\n\n> ✅ 推荐模型：为获得更高的人类评分相关性，建议使用 `microsoft\u002Fdeberta-xlarge-mnli` 替代默认的 `roberta-large`：\n> ```sh\n> bert-score -r refs.txt -c hyps.txt --lang en --model_type microsoft\u002Fdeberta-xlarge-mnli\n> ```","某电商团队正在迭代智能客服系统，需要频繁评估模型生成的回复是否自然且准确，以优化用户体验。\n\n### 没有 bert_score 时\n- 依赖 BLEU 或 ROUGE 等传统指标，仅通过字面重合度打分，导致语义相同但措辞不同的优质回复被误判为低分。\n- 人工抽检耗时费力，标注团队每天需花费数小时逐条阅读对话日志，难以覆盖全量数据，反馈周期长达数天。\n- 模型迭代方向模糊，开发人员无法量化“语义相似度”的提升，只能凭感觉调整参数，常出现指标上升但用户满意度下降的错位。\n- 多语言场景下缺乏统一标准，不同语种的回复质量难以横向对比，导致小语种服务优化滞后。\n\n### 使用 bert_score 后\n- 利用 BERT 深层语义嵌入计算相似度，精准识别出“请稍等”与“马上为您处理”这类语义一致的表达，评估结果与人类直觉高度吻合。\n- 实现自动化批量评分，分钟级完成数万条对话的质量分析，将版本验证周期从三天缩短至两小时，大幅释放人力。\n- 提供精确的量化依据，团队能清晰看到更换 `deberta-xlarge` 模型后语义得分的具体提升，从而自信地推进架构升级。\n- 支持全球 130+ 种预训练模型，轻松切换至对应语种的 BERT 变体，确保中文、英文及小语种回复均在同一语义维度下公平评估。\n\nbert_score 通过将文本评估从“字面匹配”升级为“语义理解”，成为了连接算法指标与真实用户体验的关键桥梁。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTiiiger_bert_score_efeb3f72.png","Tiiiger","Tianyi","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FTiiiger_91f9cb9f.jpg","To be determined ...",null,"San Francisco, CA","https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Ftianyi-zhang-843a64120\u002F","https:\u002F\u002Fgithub.com\u002FTiiiger",[81,85,89],{"name":82,"color":83,"percentage":84},"Jupyter Notebook","#DA5B0B",80.1,{"name":86,"color":87,"percentage":88},"Python","#3572A5",19.5,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0.3,1888,238,"2026-04-15T18:03:34","MIT","未说明","强烈建议使用 GPU（因为计算密集），但未指定具体型号、显存大小或 CUDA 版本",{"notes":100,"python":101,"dependencies":102},"该工具计算密集型，通常必须使用 GPU。支持约 130 种预训练模型（如 RoBERTa, DeBERTa, BART 等），首次运行需下载模型权重。中国用户下载预训练权重可能较慢，README 提供了百度网盘链接。若使用 fast tokenizer 
可能会导致分数差异。",">=3.6",[103,104],"torch>=1.0.0","transformers",[35,14],[107,108],"natural-language-processing","machine-learning","2026-03-27T02:49:30.150509","2026-04-16T15:50:52.151885",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},35884,"当候选句子和参考句子的数量不一致时，为什么会报错？如何解决？","BertScore 计算的是两个字符串之间的相似度。调用 `score(cands, refs, ...)` 时，它会计算 `cand[i]` 和 `ref[i]` 之间的分数。如果您想计算包含多个句子的段落之间的 BertScore，需要先将所有句子拼接成一个完整的字符串，让 `cand[i]` 和 `ref[i]` 分别代表整个段落，而不是单独的句子列表。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F111",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},35885,"运行代码速度非常慢，有什么优化建议吗？","如果您需要重复调用评分函数，建议使用 `Scorer` 对象。直接使用 `score` 函数会在每次调用时重新加载模型（虽然不会重新下载），这在 GPU 上会产生显著开销。使用 `Scorer` 可以缓存模型，避免重复加载，从而大幅提升速度。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F3",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},35886,"为什么手动计算的 F1 分数与函数返回的 F1 分数不一致？","这通常是因为使用了 `rescale_with_baseline=True` 参数。在这种情况下，原始的 P（精确率）和 R（召回率）会先根据各自的基线分数进行独立重缩放，然后再计算 F1 分数；或者先计算原始 F1 再基于 F1 基线重缩放。因此，直接对重缩放后的 P 和 R 使用标准公式 `(2 * P * R \u002F (P + R))` 计算出的结果会与库返回的重缩放后的 F1 不同。若需复现结果，请勿手动计算公式，直接使用库返回的 F 值。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F46",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},35887,"BertScore 出现负值的含义是什么？","负值表示您得到的相似度低于两个随机句子之间的相似度。在重缩放（rescaled）模式下，0 分通常代表随机句子对的相似度水平。如果您的 BLEU 分数也接近零，出现负的 BertScore 是符合预期的，表明生成文本与参考文本几乎没有语义关联。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F152",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},35888,"如何获取论文中使用的机器翻译或图像描述评估数据集（含人工评分）？","仓库不提供具体的评估数据集（如 WMT18）。论文中提到的数据集（如 WMT18 metric evaluation dataset）应由相关竞赛的组织者提供。建议直接联系 WMT 等竞赛的组织方或查阅其官方文档以获取包含预测结果、黄金参考和人工评分的完整数据，本仓库仅专注于提供评分代码支持。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F79",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},35889,"BertScore 是否支持跨语言评估（例如对比英文和德文句子）？","理论上，如果使用的 BERT 模型（如 multilingual BERT）学习了联合语言表示，使得不同语言中语义相似的词向量距离较近，那么 BertScore 可以用于评估不同语言句子之间的相似度。但这取决于所选模型是否具备强大的跨语言对齐能力（类似 MUSE 或 LASER 模型的效果）。","https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F7",[143,148,153,158,163,168,173,178,183,188,193,198,203,208,213,218,223,228],{"id":144,"version":145,"summary_zh":146,"released_at":147},281125,"v0.3.13","- 修复 transformers 版本 > 4.17.0 的 bug ([#148](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F148))","2023-02-20T21:05:39",{"id":149,"version":150,"summary_zh":151,"released_at":152},281126,"v0.3.12","- 使 `get_idf_dict` 兼容 DDP ([#140](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F140))\n- 修复安装错误 ([#138](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F138))","2022-10-14T21:46:13",{"id":154,"version":155,"summary_zh":156,"released_at":157},281127,"v0.3.11","- 更新至 0.3.11 版本\n  - 支持 6 个 DeBERTa v3 模型\n  - 支持 3 个 ByT5 模型","2021-12-10T22:45:27",{"id":159,"version":160,"summary_zh":161,"released_at":162},281128,"v0.3.10","- 更新至 0.3.10 版本\n  - 支持 8 种 SimCSE 模型\n  - 修复 SciBERT 的支持问题（使其与 transformers >= 4.0.0 兼容）\n  - 添加用于复现论文中部分结果的脚本（详见此 [文件夹](.\u002Freproduce)）\n  - 通过 `--use_fast_tokenizer` 参数支持 Hugging Face Transformers 中的快速分词器。需要注意的是，由于分词器实现的不同，得分可能会有所差异（[#106](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F106)）。\n  - 修复候选字符串为空时召回率不为零的问题（[#107](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F107)）。\n  - 增加对土耳其语 
BERT 的支持（[#108](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F108)）。","2021-08-05T04:54:46",{"id":164,"version":165,"summary_zh":166,"released_at":167},281129,"v0.3.9","- 支持 3 个 BigBird 模型\n- 修复 mBART 和 T5 的 bug\n- 按需求支持 4 个 mT5 模型（[#93](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F93)）\n","2021-04-17T18:45:16",{"id":169,"version":170,"summary_zh":171,"released_at":172},281130,"v0.3.8","- 支持53个新的预训练模型，包括BART、mBART、BORT、DeBERTa、T5、mT5、BERTweet、MPNet、ConvBERT、SqueezeBERT、SpanBERT、PEGASUS、Longformer、LED、Blendbot等。其中，在WMT16数据集上，DeBERTa与人工评分的相关性高于RoBERTa（我们的默认模型）。相关性结果见此[Google表格](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing)。\n- 如果希望评分与人工评分有更好的相关性，请考虑使用`--model_type microsoft\u002Fdeberta-xlarge-mnli`或`--model_type microsoft\u002Fdeberta-large-mnli`（速度更快）。\n- 添加DeBERTa模型的基线文件。\n- 添加生成基线文件的示例代码（请参阅[详情](get_rescale_baseline)）。","2021-03-03T18:17:21",{"id":174,"version":175,"summary_zh":176,"released_at":177},281131,"v0.3.7","已更新至 0.3.7 版本  \n- 现在兼容 Hugging Face 的 transformers 库 >=4.0.0 版本。感谢各位社区贡献者（[#84](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F84)、[#85](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F85)、[#86](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F86)）。","2020-12-06T22:38:06",{"id":179,"version":180,"summary_zh":181,"released_at":182},281132,"v0.3.6","更新至版本 0.3.6  \n  - 支持自定义基准文件 [#74](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F74)  \n  - 将选项 `--rescale-with-baseline` 改为 `--rescale_with_baseline`，使其与其他选项保持一致。","2020-09-03T19:40:29",{"id":184,"version":185,"summary_zh":186,"released_at":187},281133,"v0.3.5","已更新至 0.3.5 版本  \n- 兼容 Hugging Face 的 transformers >= v3.0.0，并进行了一些小修复（[#58](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F58)、[#66](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F66)、[#68](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F68)）  \n- 多项性能优化（[#67](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F67)、[#69](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fpull\u002F69)）","2020-07-17T20:23:37",{"id":189,"version":190,"summary_zh":191,"released_at":192},281134,"v0.3.4","- 升级以兼容 transformers v2.11.0 (#58)\n- 修复 bug：支持 CPU 推理 (#50)","2020-06-10T15:32:43",{"id":194,"version":195,"summary_zh":196,"released_at":197},281135,"v0.3.3","- Fixing the bug with empty strings [issue #47](https:\u002F\u002Fgithub.com\u002FTiiiger\u002Fbert_score\u002Fissues\u002F47).\r\n- Supporting 6 [ELECTRA](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Felectra) models and 24 smaller [BERT](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fbert) models.\r\n- A new [Google sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI\u002Fedit?usp=sharing) for keeping the performance (i.e., pearson correlation with human judgment) of different models on WMT16 to-English.\r\n- Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the [details](tune_layers)).","2020-05-10T22:23:30",{"id":199,"version":200,"summary_zh":201,"released_at":202},281136,"v0.3.2","- **Bug fixed**: fixing the bug in v0.3.1 when having multiple reference sentences.\r\n- Supporting multiple reference sentences with our command-line 
tool","2020-04-18T17:04:54",{"id":204,"version":205,"summary_zh":206,"released_at":207},281137,"v0.3.1","- A new `BERTScorer` object that caches the model to avoid re-loading it multiple times. Please see our [jupyter notebook example](.\u002Fexample\u002FDemo.ipynb) for the usage.\r\n- Supporting multiple reference sentences for each example. The `score` function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.","2020-04-18T17:09:21",{"id":209,"version":210,"summary_zh":211,"released_at":212},281138,"v0.3.0","- Supporting *Baseline Rescaling*: we apply a simple linear transformation to enhance the readability of BERTscore using pre-computed \"baselines\". It has been pointed out (e.g. by #20, #23) that the numerical range of BERTScore is exceedingly small when computed with RoBERTa models. In other words, although BERTScore correctly distinguishes examples through ranking, the numerical scores of good and bad examples are very similar. We detail our approach in [a separate post](.\u002Fjournal\u002Frescale_baseline.md).","2020-04-18T17:11:28",{"id":214,"version":215,"summary_zh":216,"released_at":217},281139,"v0.2.3","- Supporting DistilBERT (Sanh et al.), ALBERT (Lan et al.), and XLM-R (Conneau et al.) models.\r\n- Including the version of huggingface's transformers in the hash code for reproducibility\r\n","2020-04-18T17:17:46",{"id":219,"version":220,"summary_zh":221,"released_at":222},281140,"v0.2.2","- **Bug fixed**: when using RoBERTaTokenizer, we now set `add_prefix_space=True` which was the default setting in huggingface's `pytorch_transformers` (when we ran the experiments in the paper) before they migrated it to `transformers`. This breaking change in `transformers` leads to a lower correlation with human evaluation. To reproduce our RoBERTa results in the paper, please use version `0.2.2`.\r\n- The best number of layers for DistilRoBERTa is included\r\n- Supporting loading a custom model","2020-04-18T17:18:38",{"id":224,"version":225,"summary_zh":226,"released_at":227},281141,"v0.2.1","- [SciBERT](https:\u002F\u002Fgithub.com\u002Fallenai\u002Fscibert) (Beltagy et al.) models are now included. Thanks to AI2 for sharing the models. By default, we use the 9th layer (the same as BERT-base), but this is not tuned. ","2020-04-18T17:19:23",{"id":229,"version":230,"summary_zh":231,"released_at":232},281142,"v0.2.0","- Supporting BERT, XLM, XLNet, and RoBERTa models using [huggingface's Transformers library](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers)\r\n- Automatically picking the best model for a given language\r\n- Automatically picking the layer based on a model\r\n- IDF is *not* set as default as we show in the new version that the improvement brought by importance weighting is not consistent","2020-04-18T17:20:32"]