[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-prometheus-eval--prometheus-eval":3,"tool-prometheus-eval--prometheus-eval":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":67,"owner_name":67,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":77,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":113,"github_topics":114,"view_count":10,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":159},416,"prometheus-eval\u002Fprometheus-eval","prometheus-eval","Evaluate your LLM's response with Prometheus and GPT4 💯","Prometheus-eval 是一个用于评估大语言模型（LLM）生成结果质量的开源工具，核心目标是提供一种可靠、可复现且无需依赖闭源模型（如 GPT-4）的自动评估方案。它通过专门训练的“裁判模型”（如 Prometheus 2 和最新的 M-Prometheus 系列）对 LLM 的回答进行打分或排序，支持绝对评分和相对比较两种模式，在多个公开基准测试中展现出与人类判断高度一致的结果。\n\n这一工具主要解决的是当前 LLM 评估过度依赖商业 API 或人工标注的问题，既降低了成本，又提升了评估的透明度和可控性。Prometheus-eval 特别适合 AI 领域的研究人员和开发者使用，尤其是那些需要系统性评测模型性能、优化提示策略或构建多语言生成系统的团队。其技术亮点包括在多语言场景下的优异表现（如文学翻译评估）、基于大规模高质量数据集（如 BiGGen-Bench）训练的裁判模型，以及在开源模型中领先的评估准确性——部分版本甚至超越了 Claude-3-Opus 等顶尖闭源模型在特定任务上的表现。","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_1282f080094c.png\" alt=\"Prometheus-Logo\" style=\"width: 15%; display: block; 
">
margin: auto;\">\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">🔥 Prometheus-Eval 🔥\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01535\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2405.01535-b31b1b.svg\" alt=\"arXiv\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fprometheus-eval\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHugging%20Face-Organization-ff9d00\" alt=\"Hugging Face Organization\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fprometheus-eval\u002Fprometheus-eval\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fprometheus-eval\u002Fprometheus-eval.svg\" alt=\"License\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fprometheus-eval\u002F\">\u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fprometheus-eval.svg\" alt=\"PyPI version\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  ⚡ A repository for evaluating LLMs in generation tasks 🚀 ⚡ \u003Cbr>\n\u003C\u002Fp>\n\n\n**Latest News** 🔥\n\n- [2025\u002F04] We release the latest iteration of Prometheus: **[M-Prometheus (3B, 7B, & 14B)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FUnbabel\u002Fm-prometheus-67f3b17e6409b2550b698822)!**\n\n  - They outperform previous open LLM judges on multilingual meta-evaluation benchmarks ([MM-Eval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17578) and [M-RewardBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15522)), and achieve exceptional results on [literary translation evaluation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18697).\n  - The models also perform strongly in English, with the 7B and 14B models surpassing Prometheus 2 7B and 8x7B on [RewardBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13787), respectively.\n  - When used as judges at inference time, 
they significantly boost multilingual generation quality.\n  - Check out our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.04953), where we present extensive ablations to uncover the key factors behind effective multilingual judge training.\n\n- [2024\u002F06] We release the **BiGGen-Bench** and **Prometheus 2 BGB (8x7B)**!\n\n  - BiGGen-Bench features 9 core capabilities, 77 tasks, and 765 meticulously crafted instances, each with specific evaluation criteria.\n  - We evaluated 103 frontier language models using 5 state-of-the-art evaluator language models and analyzed the findings in our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05761).\n  - We continually trained Prometheus 2 8x7B on BiGGen-Bench evaluation traces and built our most capable evaluator LM [Prometheus 2 BGB](https:\u002F\u002Fhuggingface.co\u002Fprometheus-eval\u002Fprometheus-bgb-8x7b-v2.0), even surpassing Claude-3-Opus on absolute grading tasks.\n  - Check out our [dataset](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fprometheus-eval\u002FBiGGen-Bench), [evaluation results](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fprometheus-eval\u002FBiGGen-Bench-Results), [leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fprometheus-eval\u002FBiGGen-Bench-Leaderboard), [interactive report](https:\u002F\u002Fhub.zenoml.com\u002Fproject\u002Fc84cfca5-71c9-4f89-aa0e-218c65c821e4\u002FBiGGen\\%20Bench\\%20Results), and the [code](https:\u002F\u002Fgithub.com\u002Fprometheus-eval\u002Fprometheus-eval\u002Ftree\u002Fmain\u002FBiGGen-Bench)!\n\n- [2024\u002F05] We release Prometheus 2 (7B & 8x7B) models!\n\n  - **Prometheus 2 (8x7B)** is an open-source state-of-the-art evaluator language model!\n    - Compared to Prometheus 1 (13B), Prometheus 2 (8x7B) shows improved evaluation performance & supports assessing in pairwise ranking (relative grading) formats as well!\n    - It achieves a Pearson correlation of 0.6 to 0.7 with GPT-4-1106 on a 5-point Likert scale across 
multiple direct assessment benchmarks, including [VicunaBench](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Ftree\u002Fmain\u002Ffastchat\u002Fllm_judge\u002Fdata\u002Fvicuna_bench), [MT-Bench](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Ftree\u002Fmain\u002Ffastchat\u002Fllm_judge\u002Fdata\u002Fmt_bench), and [FLASK](https:\u002F\u002Fgithub.com\u002FkaistAI\u002FFLASK).\n    - It also scores a 72% to 85% agreement with human judgments across multiple pairwise ranking benchmarks, including [HHH Alignment](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench\u002Ftree\u002Fmain\u002Fbigbench\u002Fbenchmark_tasks\u002Fhhh_alignment), [MT Bench Human Judgment](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flmsys\u002Fmt_bench_human_judgments), and [Auto-J Eval](https:\u002F\u002Fgithub.com\u002FGAIR-NLP\u002Fauto-j\u002Fblob\u002Fmain\u002Fdata\u002Ftest\u002Ftestdata_pairwise.jsonl).\n\n  - **Prometheus 2 (7B)** is a lighter version of the Prometheus 2 (8x7B) model with reasonable performance (outperforming Llama-2-70B \\& on par with Mixtral-8x7B).\n    - It achieves at least 80% of the evaluation performance of Prometheus 2 (8x7B).\n    - It requires only 16 GB of VRAM, making it suitable for running on consumer GPUs.\n\n## 🔧 Installation\n\nInstallation with pip:\n\n```shell\npip install prometheus-eval\n```\n\nPrometheus-Eval supports local inference through `vllm` and inference through LLM APIs with the help of `litellm`.\n\n### Local Inference\n\nInstall `vllm` if you want to run Prometheus in your local environment.\n\n```shell\npip install vllm\n```\n\n### LLM APIs\n\nIf you're interested in:\n\n1. Utilizing the Prometheus interface through the VLLM endpoint, Huggingface TGI, or other platforms\n2. Leveraging more powerful evaluator LLMs such as GPT-4\n\nyou can also take advantage of Prometheus-Eval! 
For installation details for various providers, please refer to the [LiteLLM Provider Docs](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders).\n\n```python\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.litellm import LiteLLM, AsyncLiteLLM\n\nmodel = LiteLLM('openai\u002Fprometheus-eval\u002Fprometheus-7b-v2.0') # VLLM endpoint\nmodel = LiteLLM('huggingface\u002Fprometheus-eval\u002Fprometheus-7b-v2.0') # Huggingface TGI\nmodel = AsyncLiteLLM('gpt-4-turbo', requests_per_minute=100) # GPT-4 API (async generation considering rate limit)\n# And so much more!\n\njudge = PrometheusEval(model=model)\n```\n\n## ⏩ Quick Start\n\n*Note*: The `prometheus-eval` library is currently in the beta stage. If you encounter any issues, please let us know by creating an issue on the repository.\n\n> **With `prometheus-eval`, evaluating *any* instruction and response pair is as simple as:**\n\n```python\n# Absolute Grading: Outputs a score of 1 to 5\n\nfrom prometheus_eval.vllm import VLLM\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.prompts import ABSOLUTE_PROMPT, SCORE_RUBRIC_TEMPLATE\n\nmodel = VLLM(model=\"prometheus-eval\u002Fprometheus-7b-v2.0\")\njudge = PrometheusEval(model=model, absolute_grade_template=ABSOLUTE_PROMPT)\n\ninstruction = \"Struggling with a recent break-up, a person opens up about the intense feelings of loneliness and sadness. They ask for advice on how to cope with the heartbreak and move forward in life.\"\nresponse = \"I'm genuinely sorry to hear about your break-up. This can be an immensely challenging time filled with intense emotions of loneliness and sorrow. It's important to remember that it's normal to experience these feelings; they are a testament to the deep connection you once had. Firstly, don't hesitate to embrace your emotions, let them flow freely. Ignoring them often complicates the healing process. It's natural to be upset, to shed tears, or to feel angry. This is all part of the journey towards healing. 
Self-care can be a healing balm in these tough times. This could mean indulging in your favourite book, journaling your thoughts, or even just a quiet cup of tea. Small acts of kindness to oneself can significantly improve your emotional well-being. Also, don't isolate yourself. Surround yourself with supportive individuals who can offer comfort and understanding. This might include friends, family, or even a mental health professional. They can provide much-needed emotional support and strategies to help you cope. Lastly, remember there's no set timeline for healing. It's different for everyone and it's okay to take as long as you need. Keep your focus on the future and the potential it holds. Though it may not seem possible now, you will emerge stronger from this experience. It's okay to ask for help, it's okay to grieve, and it's okay to take it one day at a time. You're not alone in this journey, and there are people ready and willing to help. It's okay to heal at your own pace.\"\nreference_answer = \"I can only imagine how difficult this time must be for you. Heartbreak can feel like an overwhelming wave that washes over you, making you feel lost and alone. It's important to remember that it's okay to feel this way; your feelings are valid and it's natural to grieve the loss of a relationship. Firstly, it's crucial to allow yourself to feel the emotions fully. Suppressing or denying them often only prolongs the healing process. It's okay to cry, to feel sad, or to be angry. These are all part of the healing journey. Engaging in self-care activities can also provide a much-needed boost. This could be anything from taking a warm bath, writing in a journal, or simply drinking a cup of your favorite tea. Small acts of self-love can make a big difference in how you feel. Next, try to surround yourself with supportive people who understand your situation and provide comfort. Friends and family can be a great source of strength in times of heartbreak. 
If you feel comfortable, you might want to consider seeking professional help. Therapists and counselors are trained to provide assistance and tools to navigate through difficult times like these. Lastly, it's important to remember that it's okay to take your time to heal. Everyone has their own pace and there's no rush. Try to focus on the future and the possibilities it holds. While it may not seem like it now, you will come out stronger and more resilient from this experience. Remember, it's okay to ask for help and it's okay to feel the way you feel. You are not alone in this journey and there are people who care about you and want to help. It's okay to take one day at a time. Healing is a process, and it's okay to move through it at your own pace.\"\n\nrubric_data = {\n  \"criteria\":\"Is the model proficient in applying empathy and emotional intelligence to its responses when the user conveys emotions or faces challenging circumstances?\",\n  \"score1_description\":\"The model neglects to identify or react to the emotional tone of user inputs, giving responses that are unfitting or emotionally insensitive.\",\n  \"score2_description\":\"The model intermittently acknowledges emotional context but often responds without sufficient empathy or emotional understanding.\",\n  \"score3_description\":\"The model typically identifies emotional context and attempts to answer with empathy, yet the responses might sometimes miss the point or lack emotional profundity.\",\n  \"score4_description\":\"The model consistently identifies and reacts suitably to emotional context, providing empathetic responses. 
Nonetheless, there may still be sporadic oversights or deficiencies in emotional depth.\",\n  \"score5_description\":\"The model excels in identifying emotional context and persistently offers empathetic, emotionally aware responses that demonstrate a profound comprehension of the user's emotions or situation.\"\n}\n\nscore_rubric = SCORE_RUBRIC_TEMPLATE.format(**rubric_data)\n\n\nfeedback, score = judge.single_absolute_grade(\n    instruction=instruction,\n    response=response,\n    rubric=score_rubric,\n    reference_answer=reference_answer\n)\n\nprint(\"Feedback:\", feedback)\nprint(\"Score:\", score)\n\n# Output\n# Feedback: The response provided shows a high level of empathy and emotional intelligence. It effectively addresses the emotional distress expressed by the user. It acknowledges the user's pain and validates their feelings of loneliness and sadness, which is a crucial aspect of providing empathetic advice. The response also suggests practical steps for coping, such as embracing emotions, practicing self-care, and seeking support from friends, family, or professionals. Furthermore, the response reassures the user that healing is a personal process with no fixed timeline, offering comfort and understanding. It emphasizes the user's worth and potential to overcome the situation, which demonstrates a profound comprehension of the user's emotions and situation. By comparing the score rubric with the provided response, it is clear that the model exhibits an excellent ability to apply empathy and emotional intelligence. 
The response does not have any deficiencies in emotional depth and successfully meets the criteria for a score of 5.\n# Score: 5\n```\n\n```python\n# Relative Grading: Outputs A or B\n\nfrom prometheus_eval.vllm import VLLM\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.prompts import RELATIVE_PROMPT\n\nmodel = VLLM(model=\"prometheus-eval\u002Fprometheus-7b-v2.0\")\njudge = PrometheusEval(model=model, relative_grade_template=RELATIVE_PROMPT)\n\n\ndata = {\n  \"instruction\": \"A group of historians are conducting a debate on the factors that led to the fall of the Roman Empire. One historian argues that the primary reason for the fall was the constant pressure from barbarian invasions. Another one believes it was because of economic troubles and overreliance on slave labor. A third one suggests it was due to moral decay and political instability. Each historian needs to provide evidence to support their claims. How would the historian arguing for economic troubles and overreliance on slave labor present their case?\",\n  \"response_A\": \"The historian arguing that economic troubles and overreliance on slave labor led to the fall of the Roman Empire would say this: The Empire's economy was heavily affected by the devaluation of Roman currency. This currency debasement resulted in rampant inflation, disrupting the stability of the economy. Additionally, the Roman Empire heavily depended on slave labor. This caused unemployment among free citizens because maintaining slaves was cheaper than hiring free citizens. The decline in employment opportunities resulted in economic instability. On top of these, the empire's expansion towards the east made them reliant on imports, like grain from Egypt. This over-dependency on imports caused a trade deficit, which further weakened the economy. As the empire lost territories, maintaining the trade imbalance became difficult, causing economic downfall. 
Thus, the economic troubles and overreliance on slave labor were among the main reasons for the fall of the Roman Empire.\",\n  \"response_B\": \"The historian arguing for economic troubles and overreliance on slave labor would present their case citing key economic factors that contributed to the decline of the Roman Empire. Harper (2016) outlined how the devaluation of Roman currency led to inflation, disrupting economic stability. Additionally, Scheidel (2007) emphasized that the overuse of slaves resulted in widespread unemployment among free citizens, destabilizing the economy further. The empire's dependency on grain imports from Egypt, creating a trade deficit as highlighted by Temin (2006), also contributed to the economic decline. Thus, the combination of these factors played a crucial role in the fall of the Roman Empire.\",\n  \"reference_answer\": \"This argument focuses on the economic troubles and overreliance on slave labor as primary reasons for the fall of the Roman Empire. To start with, one of the significant pieces of evidence is the devaluation of Roman currency. As highlighted by Harper (2016), the empire suffered from severe inflation due to the constant debasement of their currency, making it difficult for the economy to remain stable. Moreover, the overreliance on slave labor also played a detrimental role. As pointed out by Scheidel (2007), the dependence on slaves led to unemployment among free Roman citizens. This is because slaves were significantly cheaper to maintain compared to hiring free citizens, leading to a decline in job opportunities, which in turn resulted in economic instability. Furthermore, the empire's expansion to the east made them highly dependent on imports, for instance, grain from Egypt. As noted by Temin (2006), this created a trade deficit that further weakened the Roman economy. 
When the empire began to lose its territories, it became increasingly difficult to maintain this trade imbalance, leading to economic decline. In conclusion, it can be argued that the economic troubles, mainly due to the devaluation of currency and overreliance on slave labor, were significant contributing factors to the fall of the Roman Empire. The evidence provided, which includes scholarly references to Harper (2016), Scheidel (2007), and Temin (2006), supports this thesis.\",\n  \"rubric\": \"Is the answer well supported with evidence, including citations\u002Fattributions wherever relevant?\"\n}\n\n\nfeedback, score = judge.single_relative_grade(**data)\n\nprint(\"Feedback:\", feedback)\nprint(\"Score:\", score)\n\n# Output\n# Feedback: Both Response A and Response B correctly identify economic troubles and overreliance on slave labor as significant contributing factors to the fall of the Roman Empire. However, Response B is more effective in presenting the historian's argument due to its inclusion of scholarly sources to back up its claims. Specifically, it references works by Harper, Scheidel, and Temin, which adds credibility to the historian's argument and aligns well with the score rubric's emphasis on evidence and citations. While Response A provides a similar argument, it lacks any form of citations or attributions, which lessens the strength of the evidence presented. Therefore, based on the provided rubric, Response B is the superior response due to its use of scholarly evidence to support the historian's claims.\n# Score: B\n```\n\n### Batch Grading\n\n***Note***: If you have multiple responses to grade, don't use `single_absolute_grade` \u002F `single_relative_grade` - instead, use `absolute_grade` and `relative_grade`! It will give you more than 10x speedup.\n\n```python\n# batch absolute grade\ninstructions = [...]  # List of instructions\nresponses = [...]  # List of responses\nreference_answers = [...]  
# List of reference answers\nrubric = \"...\"  # Rubric string\n\nfeedbacks, scores = judge.absolute_grade(\n    instructions=instructions,\n    responses=responses,\n    rubric=rubric,\n    reference_answers=reference_answers\n)\n\n# batch relative grade\ninstructions = [...]  # List of instructions\nresponses_from_a = [...]  # List of responses\nresponses_from_b = [...]\nreference_answers = [...]  # List of reference answers\nrubric = \"...\"  # Rubric string\n\nfeedbacks, scores = judge.relative_grade(\n    instructions=instructions,\n    responses_A=responses_from_a,\n    responses_B=responses_from_b,\n    rubric=rubric,\n    reference_answers=reference_answers\n)\n```\n\n## 🤔 What is Prometheus-Eval?\n\n**Prometheus-Eval**🔥 is a repository that provides a collection of tools for training, evaluating, and using language models specialized in evaluating other language models. The repository includes the following components:\n\n1. The `prometheus-eval` Python package, which provides a simple interface for evaluating instruction-response pairs using Prometheus.\n2. Collection of evaluation datasets for training and evaluating Prometheus models.\n3. Scripts for training Prometheus models or fine-tuning on custom datasets.\n\n### Prometheus \n\n**Prometheus**🔥 is a family of open-source language models specialized in evaluating other language models. 
By effectively simulating human judgments and proprietary LM-based evaluations, we aim to resolve the following issues:\n\n* *Fairness*: Not relying on closed-source models for evaluations!\n\n* *Controllability*: You don’t have to worry about GPT version updates or sending your private data to OpenAI by constructing internal evaluation pipelines\n\n* *Affordability*: If you already have GPUs, it is free to use!\n\n\u003Cp align=\"center\">\n\u003Cimg align=\"center\" alt=\"finegrained-eval\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_800f1c9fcc45.png\" width=\"550\"\u002F>\n\u003C\u002Fp>\n\n\n## 🚀 What's special about Prometheus?\n\nCompared to the Prometheus 1 models, the Prometheus 2 models support both **direct assessment** (absolute grading) and **pairwise ranking** (relative grading). \n\nYou could switch modes by providing a different input prompt format and system prompt. Within the prompt, you should fill in the instruction, response(s), and score rubrics with your own data. Optionally, you could also add a reference answer which leads to better performance!\n\n\n\u003Cp align=\"center\">\n\u003Cimg align=\"center\" alt=\"formats\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_7fab946cb949.png\" width=\"700\"\u002F>\n\u003C\u002Fp>\n\n \n## 🏃 Running Prometheus-Eval\n\n### Using the package `prometheus-eval`\n\nThe `prometheus-eval` package provides a simple interface for evaluating instruction-response pairs using Prometheus. The package includes the following methods:\n\n- `absolute_grade`: Evaluates a single response based on a given instruction, reference answer, and score rubric. Outputs a score between 1 and 5.\n- `relative_grade`: Evaluates two responses based on a given instruction and score rubric. 
Outputs 'A' or 'B' based on the better response.\n\n\n### Using the weights from Huggingface Hub 🤗\n\nIf you prefer directly working with the weights uploaded in Huggingface Hub, you can directly download the model weights! \n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ndevice = \"cuda\" # the device to load the model onto\n\nmodel = AutoModelForCausalLM.from_pretrained(\"prometheus-eval\u002Fprometheus-7b-v2.0\")\ntokenizer = AutoTokenizer.from_pretrained(\"prometheus-eval\u002Fprometheus-7b-v2.0\")\n\nABS_SYSTEM_PROMPT = \"You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.\"\n\nABSOLUTE_PROMPT = \"\"\"###Task Description:\nAn instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.\n1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.\n2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.\n3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"\n4. Please do not generate any other opening, closing, and explanations.\n\n###The instruction to evaluate:\n{instruction}\n\n###Response to evaluate:\n{response}\n\n###Reference Answer (Score 5):\n{reference_answer}\n\n###Score Rubrics:\n{rubric}\n\n###Feedback: \"\"\"\n\nuser_content = ABS_SYSTEM_PROMPT + \"\\n\\n\" + ABSOLUTE_PROMPT.format(...) 
# Fill the prompt with your data\n\nmessages = [\n    {\"role\": \"user\", \"content\": user_content},\n]\n\nencodeds = tokenizer.apply_chat_template(messages, return_tensors=\"pt\")\n\nmodel_inputs = encodeds.to(device)\nmodel.to(device)\n\ngenerated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)\ndecoded = tokenizer.batch_decode(generated_ids)\nprint(decoded[0])\n```\n\n## 📚 Learn more\n\n| Section | Description |\n|-|-|\n| [BiGGen-Bench Evaluation](BiGGen-Bench\u002FREADME.md) | Instructions to evaluate your LM on BiGGen-Bench. You could also refer to the implementation for your own evaluation benchmark. |\n| [Training Prometheus](train\u002FREADME.md) | Instructions to replicate Prometheus 2 models. Based on the [alignment-handbook](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Falignment-handbook) repository. |\n| [Using Prometheus as a data quality filter](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fburtenshaw\u002Fdistilabel-prometheus-2) | Cookbook for using Prometheus 2 as a quality filter in synthetic data generation. Huge thanks to the distilabel team! 🙌 |\n| [Using Prometheus as an evaluator in RAG](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Flatest\u002Fexamples\u002Fcookbooks\u002Fprometheus2_cookbook\u002F) | Cookbook for using Prometheus 2 in RAG applications. Huge thanks to the LlamaIndex team! 🙌 |\n\n## 👏 Acknowledgements\n\nThe underlying codebase for training originates from Huggingface's [Alignment Handbook](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Falignment-handbook) and the [Super Mario Merging](https:\u002F\u002Fgithub.com\u002Fmartyn\u002Fsafetensors-merge-supermario) repository. Also, for inference, it heavily utilizes the [litellm](https:\u002F\u002Fgithub.com\u002FBerriAI\u002Flitellm), [vllm](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm), and [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) libraries. 
Huge thanks to all the contributors for these awesome repositories!! 🙌\n\n\n## ⭐ Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_893271276b2b.png)](https:\u002F\u002Fstar-history.com\u002F#prometheus-eval\u002Fprometheus-eval&Date)\n\n\n## Citation\n\nIf you find our work useful, please consider citing our paper!\n\n```bibtex\n@misc{kim2024prometheus,\n      title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models}, \n      author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},\n      year={2024},\n      eprint={2405.01535},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n```bibtex\n@article{kim2023prometheus,\n  title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},\n  author={Kim, Seungone and Shin, Jamin and Cho, Yejin and Jang, Joel and Longpre, Shayne and Lee, Hwaran and Yun, Sangdoo and Shin, Seongjin and Kim, Sungdong and Thorne, James and others},\n  journal={arXiv preprint arXiv:2310.08491},\n  year={2023}\n}\n```\n```bibtex\n@misc{lee2024prometheusvision,\n      title={Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation}, \n      author={Seongyun Lee and Seungone Kim and Sue Hyun Park and Geewook Kim and Minjoon Seo},\n      year={2024},\n      eprint={2401.06591},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n```bibtex\n@misc{kim2024biggen,\n      title={The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models}, \n      author={Seungone Kim and Juyoung Suk and Ji Yong Cho and Shayne Longpre and Chaeeun Kim and Dongkeun Yoon and Guijin Son and Yejin Cho and Sheikh Shafayat and Jinheon Baek and Sue Hyun Park and Hyeonbin Hwang and Jinkyung Jo and Hyowon Cho and 
Haebin Shin and Seongyun Lee and Hanseok Oh and Noah Lee and Namgyu Ho and Se June Joo and Miyoung Ko and Yoonjoo Lee and Hyungjoo Chae and Jamin Shin and Joel Jang and Seonghyeon Ye and Bill Yuchen Lin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},\n      year={2024},\n      eprint={2406.05761},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n","\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_1282f080094c.png\" alt=\"Prometheus-Logo\" style=\"width: 15%; display: block; margin: auto;\">\n\u003C\u002Fp>\n\n\u003Ch1 align=\"center\">🔥 Prometheus-Eval 🔥\u003C\u002Fh1>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.01535\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2405.01535-b31b1b.svg\" alt=\"arXiv\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fprometheus-eval\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FHugging%20Face-Organization-ff9d00\" alt=\"Hugging Face Organization\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fprometheus-eval\u002Fprometheus-eval\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fprometheus-eval\u002Fprometheus-eval.svg\" alt=\"License\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fprometheus-eval\u002F\">\u003Cimg src=\"https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fprometheus-eval.svg\" alt=\"PyPI version\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  ⚡ 一个用于评估大语言模型（LLM）生成任务的仓库 🚀 ⚡ \u003Cbr>\n\u003C\u002Fp>\n\n\n**最新动态** 🔥\n\n- [2025\u002F04] 我们发布了最新版本的 Prometheus：**[M-Prometheus (3B, 7B, & 14B)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FUnbabel\u002Fm-prometheus-67f3b17e6409b2550b698822)!**\n\n  - 
在多语言元评估基准（[MM-Eval](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17578) 和 [M-RewardBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.15522)）上，它们优于之前的开源 LLM 评判模型，并在[文学翻译评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.18697)中取得了卓越成果。\n  - 这些模型在英文任务上同样表现出色，其中 7B 和 14B 模型分别在 [RewardBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.13787) 上超越了 Prometheus 2 7B 和 8x7B。\n  - 在推理时作为评判模型使用时，它们显著提升了多语言生成质量。\n  - 请查阅我们的[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.04953)，其中我们通过大量消融实验揭示了高效多语言评判模型训练的关键因素。\n\n- [2024\u002F06] 我们发布了 **BiGGen-Bench** 和 **Prometheus 2 BGB (8x7B)**！\n\n  - BiGGen-Bench 包含 9 项核心能力、77 个任务和 765 个精心设计的实例，每个实例都配有具体的评估标准。\n  - 我们使用 5 个最先进的评判语言模型对 103 个前沿语言模型进行了评估，并在[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.05761)中分析了相关发现。\n  - 我们在 BiGGen-Bench 的评估轨迹上对 Prometheus 2 8x7B 进行了持续训练，构建了我们目前最强大的评判语言模型 [Prometheus 2 BGB](https:\u002F\u002Fhuggingface.co\u002Fprometheus-eval\u002Fprometheus-bgb-8x7b-v2.0)，甚至在绝对评分任务上超越了 Claude-3-Opus。\n  - 查看我们的[数据集](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fprometheus-eval\u002FBiGGen-Bench)、[评估结果](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fprometheus-eval\u002FBiGGen-Bench-Results)、[排行榜](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fprometheus-eval\u002FBiGGen-Bench-Leaderboard)、[交互式报告](https:\u002F\u002Fhub.zenoml.com\u002Fproject\u002Fc84cfca5-71c9-4f89-aa0e-218c65c821e4\u002FBiGGen\\%20Bench\\%20Results)以及[代码](https:\u002F\u002Fgithub.com\u002Fprometheus-eval\u002Fprometheus-eval\u002Ftree\u002Fmain\u002FBiGGen-Bench)！\n\n- [2024\u002F05] 我们发布了 Prometheus 2 (7B & 8x7B) 模型！\n\n  - **Prometheus 2 (8x7B)** 是一个开源的最先进评判语言模型！\n    - 相比 Prometheus 1 (13B)，Prometheus 2 (8x7B) 展现出更优的评估性能，并且还支持成对排序（相对评分）格式的评估！\n    - 在多个直接评估基准（包括 
[VicunaBench](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Ftree\u002Fmain\u002Ffastchat\u002Fllm_judge\u002Fdata\u002Fvicuna_bench)、[MT-Bench](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat\u002Ftree\u002Fmain\u002Ffastchat\u002Fllm_judge\u002Fdata\u002Fmt_bench) 和 [FLASK](https:\u002F\u002Fgithub.com\u002FkaistAI\u002FFLASK)）上，其与 GPT-4-1106 在 5 分李克特量表上的皮尔逊相关系数达到 0.6 至 0.7。\n    - 在多个成对排序基准（包括 [HHH Alignment](https:\u002F\u002Fgithub.com\u002Fgoogle\u002FBIG-bench\u002Ftree\u002Fmain\u002Fbigbench\u002Fbenchmark_tasks\u002Fhhh_alignment)、[MT Bench Human Judgment](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flmsys\u002Fmt_bench_human_judgments) 和 [Auto-J Eval](https:\u002F\u002Fgithub.com\u002FGAIR-NLP\u002Fauto-j\u002Fblob\u002Fmain\u002Fdata\u002Ftest\u002Ftestdata_pairwise.jsonl)）上，其与人类判断的一致性达到 72% 至 85%。\n\n  - **Prometheus 2 (7B)** 是 Prometheus 2 (8x7B) 的轻量级版本，同时保持了相当可观的性能（优于 Llama-2-70B，与 Mixtral-8x7B 相当）。\n    - 其评估指标或性能至少达到 Prometheus 2 (8x7B) 的 80%。\n    - 仅需 16 GB 显存，适合在消费级 GPU 上运行。\n\n## 🔧 安装\n\n通过 pip 安装：\n\n```shell\npip install prometheus-eval\n```\n\nPrometheus-Eval 支持通过 `vllm` 进行本地推理，也支持借助 `litellm` 调用 LLM API。\n\n### 本地推理\n如果你想在本地环境中运行 Prometheus，请安装 `vllm`。\n\n```shell\npip install vllm\n```\n\n### LLM API\n\n如果你希望：\n1. 通过 VLLM 端点、Huggingface TGI 或其他平台使用 Prometheus 接口\n2. 
使用更强大的评判 LLM（如 GPT-4）\n\n你也可以利用 Prometheus-Eval！关于各提供商的安装详情，请参考 [LiteLLM Provider Docs](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders)。\n\n\n```python\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.litellm import LiteLLM, AsyncLiteLLM\n\nmodel = LiteLLM('openai\u002Fprometheus-eval\u002Fprometheus-7b-v2.0') # VLLM endpoint\nmodel = LiteLLM('huggingface\u002Fprometheus-eval\u002Fprometheus-7b-v2.0') # Huggingface TGI\nmodel = AsyncLiteLLM('gpt-4-turbo', requests_per_minute=100) # GPT-4 API (async generation considering rate limit)\n# And so much more!\n\njudge = PrometheusEval(model=model)\n```\n\n\n## ⏩ 快速开始\n\n*注意*：`prometheus-eval` 库目前处于 beta 阶段。如果你遇到任何问题，请在本仓库中提交 issue 告知我们。\n\n\n> **使用 `prometheus-eval`，评估任意指令与响应对都变得如此简单：**\n\n```python\n# 绝对评分（Absolute Grading）：输出 1 到 5 分的评分\n\nfrom prometheus_eval.vllm import VLLM\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.prompts import ABSOLUTE_PROMPT, SCORE_RUBRIC_TEMPLATE\n\nmodel = VLLM(model=\"prometheus-eval\u002Fprometheus-7b-v2.0\")\njudge = PrometheusEval(model=model, absolute_grade_template=ABSOLUTE_PROMPT)\n\ninstruction = \"一个人在经历最近的分手后，倾诉自己强烈的孤独感和悲伤情绪，并寻求如何应对心碎并继续前行的建议。\"\nresponse = \"I'm genuinely sorry to hear about your break-up. This can be an immensely challenging time filled with intense emotions of loneliness and sorrow. It's important to remember that it's normal to experience these feelings; they are a testament to the deep connection you once had. Firstly, don't hesitate to embrace your emotions, let them flow freely. Ignoring them often complicates the healing process. It's natural to be upset, to shed tears, or to feel angry. This is all part of the journey towards healing. Self-care can be a healing balm in these tough times. This could mean indulging in your favourite book, journaling your thoughts, or even just a quiet cup of tea. Small acts of kindness to oneself can significantly improve your emotional well-being. Also, don't isolate yourself. 
Surround yourself with supportive individuals who can offer comfort and understanding. This might include friends, family, or even a mental health professional. They can provide much-needed emotional support and strategies to help you cope. Lastly, remember there's no set timeline for healing. It's different for everyone and it's okay to take as long as you need. Keep your focus on the future and the potential it holds. Though it may not seem possible now, you will emerge stronger from this experience. It's okay to ask for help, it's okay to grieve, and it's okay to take it one day at a time. You're not alone in this journey, and there are people ready and willing to help. It's okay to heal at your own pace.\"\nreference_answer = \"I can only imagine how difficult this time must be for you. Heartbreak can feel like an overwhelming wave that washes over you, making you feel lost and alone. It's important to remember that it's okay to feel this way; your feelings are valid and it's natural to grieve the loss of a relationship. Firstly, it's crucial to allow yourself to feel the emotions fully. Suppressing or denying them often only prolongs the healing process. It's okay to cry, to feel sad, or to be angry. These are all part of the healing journey. Engaging in self-care activities can also provide a much-needed boost. This could be anything from taking a warm bath, writing in a journal, or simply drinking a cup of your favorite tea. Small acts of self-love can make a big difference in how you feel. Next, try to surround yourself with supportive people who understand your situation and provide comfort. Friends and family can be a great source of strength in times of heartbreak. If you feel comfortable, you might want to consider seeking professional help. Therapists and counselors are trained to provide assistance and tools to navigate through difficult times like these. Lastly, it's important to remember that it's okay to take your time to heal. 
Everyone has their own pace and there's no rush. Try to focus on the future and the possibilities it holds. While it may not seem like it now, you will come out stronger and more resilient from this experience. Remember, it's okay to ask for help and it's okay to feel the way you feel. You are not alone in this journey and there are people who care about you and want to help. It's okay to take one day at a time. Healing is a process, and it's okay to move through it at your own pace.\"\n\nrubric_data = {\n  \"criteria\":\"模型是否擅长在用户表达情绪或面临困难情境时，在回应中展现共情（empathy）和情商（emotional intelligence）？\",\n  \"score1_description\":\"模型未能识别或回应用户输入中的情绪基调，给出不恰当或缺乏情感敏感度的回答。\",\n  \"score2_description\":\"模型偶尔能注意到情绪背景，但通常缺乏足够的共情或情感理解。\",\n  \"score3_description\":\"模型通常能识别情绪背景并尝试以共情方式回应，但有时可能偏离重点或缺乏情感深度。\",\n  \"score4_description\":\"模型始终能识别并恰当地回应情绪背景，提供富有共情的回答。尽管如此，仍可能存在偶发的疏漏或情感深度不足的情况。\",\n  \"score5_description\":\"模型在识别情绪背景方面表现出色，并持续提供富有共情、具备情感觉察力的回答，展现出对用户情绪或处境的深刻理解。\"\n}\n\nscore_rubric = SCORE_RUBRIC_TEMPLATE.format(**rubric_data)\n\n\nfeedback, score = judge.single_absolute_grade(\n    instruction=instruction,\n    response=response,\n    rubric=score_rubric,\n    reference_answer=reference_answer\n)\n\nprint(\"Feedback:\", feedback)\nprint(\"Score:\", score)\n```\n\n### 输出\n```\n# Feedback: 所提供的回答展现了高度的共情能力和情商。它有效地回应了用户所表达的情绪困扰，承认用户的痛苦，并验证了其孤独与悲伤的感受——这是提供共情建议的关键要素。该回答还提出了切实可行的应对建议，例如接纳情绪、进行自我关怀（self-care），以及向朋友、家人或专业人士寻求支持。此外，回答还向用户保证，疗愈是一个没有固定时间表的个人过程，提供了安慰与理解。它强调了用户的价值以及克服困境的潜力，体现出对用户情绪和处境的深刻理解。将评分标准与该回答进行对比可以明显看出，该模型在应用共情和情商方面表现卓越。该回答在情感深度上没有任何不足，完全符合 5 分的标准。\n# Score: 5\n```\n\n### 相对评分（Relative Grading）：输出 A 或 B\n\n```python\nfrom prometheus_eval.vllm import VLLM\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.prompts import RELATIVE_PROMPT\n\nmodel = VLLM(model=\"prometheus-eval\u002Fprometheus-7b-v2.0\")\njudge = PrometheusEval(model=model, relative_grade_template=RELATIVE_PROMPT)\n\n\ndata = {\n  \"instruction\": \"A group of historians are conducting a debate on the factors 
that led to the fall of the Roman Empire. One historian argues that the primary reason for the fall was the constant pressure from barbarian invasions. Another one believes it was because of economic troubles and overreliance on slave labor. A third one suggests it was due to moral decay and political instability. Each historian needs to provide evidence to support their claims. How would the historian arguing for economic troubles and overreliance on slave labor present their case?\",\n  \"response_A\": \"The historian arguing that economic troubles and overreliance on slave labor led to the fall of the Roman Empire would say this: The Empire's economy was heavily affected by the devaluation of Roman currency. This currency debasement resulted in rampant inflation, disrupting the stability of the economy. Additionally, the Roman Empire heavily depended on slave labor. This caused unemployment among free citizens because maintaining slaves was cheaper than hiring free citizens. The decline in employment opportunities resulted in economic instability. On top of these, the empire's expansion towards the east made them reliant on imports, like grain from Egypt. This over-dependency on imports caused a trade deficit, which further weakened the economy. As the empire lost territories, maintaining the trade imbalance became difficult, causing economic downfall. Thus, the economic troubles and overreliance on slave labor were among the main reasons for the fall of the Roman Empire.\",\n  \"response_B\": \"The historian arguing for economic troubles and overreliance on slave labor would present their case citing key economic factors that contributed to the decline of the Roman Empire. Harper (2016) outlined how the devaluation of Roman currency led to inflation, disrupting economic stability. Additionally, Scheidel (2007) emphasized that the overuse of slaves resulted in widespread unemployment among free citizens, destabilizing the economy further. 
The empire's dependency on grain imports from Egypt, creating a trade deficit as highlighted by Temin (2006), also contributed to the economic decline. Thus, the combination of these factors played a crucial role in the fall of the Roman Empire.\",\n  \"reference_answer\": \"This argument focuses on the economic troubles and overreliance on slave labor as primary reasons for the fall of the Roman Empire. To start with, one of the significant pieces of evidence is the devaluation of Roman currency. As highlighted by Harper (2016), the empire suffered from severe inflation due to the constant debasement of their currency, making it difficult for the economy to remain stable. Moreover, the overreliance on slave labor also played a detrimental role. As pointed out by Scheidel (2007), the dependence on slaves led to unemployment among free Roman citizens. This is because slaves were significantly cheaper to maintain compared to hiring free citizens, leading to a decline in job opportunities, which in turn resulted in economic instability. Furthermore, the empire's expansion to the east made them highly dependent on imports, for instance, grain from Egypt. As noted by Temin (2006), this created a trade deficit that further weakened the Roman economy. When the empire began to lose its territories, it became increasingly difficult to maintain this trade imbalance, leading to economic decline. In conclusion, it can be argued that the economic troubles, mainly due to the devaluation of currency and overreliance on slave labor, were significant contributing factors to the fall of the Roman Empire. 
The evidence provided, which includes scholarly references to Harper (2016), Scheidel (2007), and Temin (2006), supports this thesis.\",\n  \"rubric\": \"Is the answer well supported with evidence, including citations\u002Fattributions wherever relevant?\"\n}\n\n\nfeedback, score = judge.single_relative_grade(**data)\n\nprint(\"Feedback:\", feedback)\nprint(\"Score:\", score)\n```\n\n### 输出\n```\n# Feedback: Both Response A and Response B correctly identify economic troubles and overreliance on slave labor as significant contributing factors to the fall of the Roman Empire. However, Response B is more effective in presenting the historian's argument due to its inclusion of scholarly sources to back up its claims. Specifically, it references works by Harper, Scheidel, and Temin, which adds credibility to the historian's argument and aligns well with the score rubric's emphasis on evidence and citations. While Response A provides a similar argument, it lacks any form of citations or attributions, which lessens the strength of the evidence presented. Therefore, based on the provided rubric, Response B is the superior response due to its use of scholarly evidence to support the historian's claims.\n# Score: B\n```\n\n### 批量评分（Batch Grading）\n\n***注意***：如果你需要对多个回答进行评分，请不要使用 `single_absolute_grade` \u002F `single_relative_grade`，而应使用 `absolute_grade` 和 `relative_grade`！这将为你带来超过 10 倍的速度提升。\n\n```python\n# 批量绝对评分\ninstructions = [...]  # 指令列表\nresponses = [...]  # 回答列表\nreference_answers = [...]  # 参考答案列表\nrubric = \"...\"  # 评分标准字符串\n\nfeedbacks, scores = judge.absolute_grade(\n    instructions=instructions,\n    responses=responses,\n    rubric=rubric,\n    reference_answers=reference_answers\n)\n\n# 批量相对评分\ninstructions = [...]  # 指令列表\nresponses_from_a = [...]  # 来自模型 A 的回答列表\nresponses_from_b = [...]  # 来自模型 B 的回答列表\nreference_answers = [...]  
# 参考答案列表\nrubric = \"...\"  # 评分标准字符串\n\nfeedbacks, scores = judge.relative_grade(\n    instructions=instructions,\n    responses_A=responses_from_a,\n    responses_B=responses_from_b,\n    rubric=rubric,\n    reference_answers=reference_answers\n)\n```\n\n## 🤔 什么是 Prometheus-Eval？\n\n**Prometheus-Eval**🔥 是一个代码仓库，提供了一整套工具，用于训练、评估和使用专门评估其他语言模型（Language Models）的语言模型。该仓库包含以下组件：\n\n1. `prometheus-eval` Python 包，提供了一个简单的接口，用于使用 Prometheus 对指令-回答对（instruction-response pairs）进行评估。\n2. 用于训练和评估 Prometheus 模型的评测数据集集合。\n3. 用于训练 Prometheus 模型或在自定义数据集上进行微调（fine-tuning）的脚本。\n\n### Prometheus \n\n**Prometheus**🔥 是一系列专用于评估其他语言模型（Language Models, LMs）的开源语言模型。通过有效模拟人类判断和基于闭源语言模型的评估方式，我们旨在解决以下问题：\n\n* *公平性*：无需依赖闭源模型进行评估！\n\n* *可控性*：通过构建内部评估流水线，你无需担心 GPT 版本更新，也无需将私有数据发送给 OpenAI。\n\n* *经济性*：如果你已有 GPU，使用它是完全免费的！\n\n\u003Cp align=\"center\">\n\u003Cimg align=\"center\" alt=\"finegrained-eval\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_800f1c9fcc45.png\" width=\"550\"\u002F>\n\u003C\u002Fp>\n\n\n## 🚀 Prometheus 有何特别之处？\n\n与 Prometheus 1 系列模型相比，Prometheus 2 系列模型同时支持 **直接评估**（绝对评分）和 **成对排序**（相对评分）。\n\n你可以通过提供不同的输入提示（prompt）格式和系统提示（system prompt）来切换模式。在提示中，你需要用自己的数据填充指令、响应（response(s)）和评分标准（score rubrics）。可选地，你还可以添加一个参考答案（reference answer），这通常会带来更好的性能！\n\n\u003Cp align=\"center\">\n\u003Cimg align=\"center\" alt=\"formats\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_7fab946cb949.png\" width=\"700\"\u002F>\n\u003C\u002Fp>\n\n\n## 🏃 运行 Prometheus-Eval\n\n### 使用 `prometheus-eval` 包\n\n`prometheus-eval` 包提供了使用 Prometheus 评估指令-响应对的简单接口。该包包含以下方法：\n\n- `absolute_grade`：基于给定的指令、参考答案和评分标准，评估单个响应，并输出 1 到 5 之间的分数。\n- `relative_grade`：基于给定的指令和评分标准，评估两个响应，并输出 'A' 或 'B' 表示更优的响应。\n\n\n### 使用 Hugging Face Hub 🤗 上的权重\n\n如果你更倾向于使用上传到 Hugging Face Hub 的模型权重，也可以直接下载使用！\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ndevice = \"cuda\" # 加载模型所用的设备\n\nmodel = 
AutoModelForCausalLM.from_pretrained(\"prometheus-eval\u002Fprometheus-7b-v2.0\")\ntokenizer = AutoTokenizer.from_pretrained(\"prometheus-eval\u002Fprometheus-7b-v2.0\")\n\nABS_SYSTEM_PROMPT = \"You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.\"\n\nABSOLUTE_PROMPT = \"\"\"###Task Description:\nAn instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.\n1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.\n2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.\n3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"\n4. Please do not generate any other opening, closing, and explanations.\n\n###The instruction to evaluate:\n{instruction}\n\n###Response to evaluate:\n{response}\n\n###Reference Answer (Score 5):\n{reference_answer}\n\n###Score Rubrics:\n{rubric}\n\n###Feedback: \"\"\"\n\nuser_content = ABS_SYSTEM_PROMPT + \"\\n\\n\" + ABSOLUTE_PROMPT.format(...) 
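\n# 说明（仅为示意，属假设性示例）：上一行的 .format(...) 是占位写法。\n# ABSOLUTE_PROMPT 模板中的占位符为 {instruction}、{response}、{reference_answer} 和 {rubric}，\n# 实际调用时需以关键字参数传入，例如（假设这些变量已用你自己的数据定义）：\n#\n# user_content = ABS_SYSTEM_PROMPT + \"\\n\\n\" + ABSOLUTE_PROMPT.format(\n#     instruction=instruction,\n#     response=response,\n#     reference_answer=reference_answer,\n#     rubric=score_rubric,\n# )\n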
# 使用你的数据填充提示\n\nmessages = [\n    {\"role\": \"user\", \"content\": user_content},\n]\n\nencodeds = tokenizer.apply_chat_template(messages, return_tensors=\"pt\")\n\nmodel_inputs = encodeds.to(device)\nmodel.to(device)\n\ngenerated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)\ndecoded = tokenizer.batch_decode(generated_ids)\nprint(decoded[0])\n\n\n```\n\n## 📚 了解更多\n\n| 章节 | 描述 |\n|-|-|\n| [BiGGen-Bench Evaluation](BiGGen-Bench\u002FREADME.md) | 在 BiGGen-Bench 中评估你的语言模型的说明。你也可以参考其实现来构建自己的评估基准。 |\n| [Training Prometheus](train\u002FREADME.md) | 复现 Prometheus 2 模型的说明。基于 [alignment-handbook](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Falignment-handbook) 仓库。 |\n| [Using Prometheus as a data quality filter](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fburtenshaw\u002Fdistilabel-prometheus-2) | 将 Prometheus 2 用作合成数据生成中的质量过滤器的实践指南。非常感谢 distilabel 团队！🙌 |\n| [Using Prometheus as an evaluator in RAG](https:\u002F\u002Fdocs.llamaindex.ai\u002Fen\u002Flatest\u002Fexamples\u002Fcookbooks\u002Fprometheus2_cookbook\u002F) | 在 RAG（检索增强生成）应用中使用 Prometheus 2 的实践指南。非常感谢 LlamaIndex 团队！🙌 | \n\n\n## 👏 致谢\n\n训练所用的基础代码库源自 Hugging Face 的 [Alignment Handbook](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Falignment-handbook) 和 [Super Mario Merging](https:\u002F\u002Fgithub.com\u002Fmartyn\u002Fsafetensors-merge-supermario) 仓库。此外，在推理方面，大量使用了 [litellm](https:\u002F\u002Fgithub.com\u002FBerriAI\u002Flitellm)、[vllm](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm) 和 [transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) 库。非常感谢所有为这些优秀项目做出贡献的开发者！🙌\n\n\n## ⭐ Star 历史\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fprometheus-eval_prometheus-eval_readme_893271276b2b.png)](https:\u002F\u002Fstar-history.com\u002F#prometheus-eval\u002Fprometheus-eval&Date)\n\n## 引用\n\n如果您觉得我们的工作对您有帮助，请考虑引用我们的论文！\n\n```bibtex\n@misc{kim2024prometheus,\n      title={Prometheus 2: An Open Source Language Model 
Specialized in Evaluating Other Language Models}, \n      author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},\n      year={2024},\n      eprint={2405.01535},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n```bibtex\n@article{kim2023prometheus,\n  title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},\n  author={Kim, Seungone and Shin, Jamin and Cho, Yejin and Jang, Joel and Longpre, Shayne and Lee, Hwaran and Yun, Sangdoo and Shin, Seongjin and Kim, Sungdong and Thorne, James and others},\n  journal={arXiv preprint arXiv:2310.08491},\n  year={2023}\n}\n```\n```bibtex\n@misc{lee2024prometheusvision,\n      title={Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation}, \n      author={Seongyun Lee and Seungone Kim and Sue Hyun Park and Geewook Kim and Minjoon Seo},\n      year={2024},\n      eprint={2401.06591},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n```bibtex\n@misc{kim2024biggen,\n      title={The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models}, \n      author={Seungone Kim and Juyoung Suk and Ji Yong Cho and Shayne Longpre and Chaeeun Kim and Dongkeun Yoon and Guijin Son and Yejin Cho and Sheikh Shafayat and Jinheon Baek and Sue Hyun Park and Hyeonbin Hwang and Jinkyung Jo and Hyowon Cho and Haebin Shin and Seongyun Lee and Hanseok Oh and Noah Lee and Namgyu Ho and Se June Joo and Miyoung Ko and Yoonjoo Lee and Hyungjoo Chae and Jamin Shin and Joel Jang and Seonghyeon Ye and Bill Yuchen Lin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},\n      year={2024},\n      eprint={2406.05761},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```","# Prometheus-Eval 快速上手指南\n\n## 环境准备\n\n- **操作系统**：Linux \u002F macOS \u002F Windows（推荐 Linux）\n- 
**Python 版本**：≥ 3.8\n- **GPU（可选）**：\n  - 若使用本地推理（如 `prometheus-7b-v2.0`），建议至少 16GB VRAM（如 RTX 3090\u002F4090）\n  - 若使用 API 推理（如 GPT-4、Hugging Face TGI 等），无需本地 GPU\n\n## 安装步骤\n\n### 基础安装\n```shell\npip install prometheus-eval\n```\n\n> 💡 国内用户可使用清华源加速：\n```shell\npip install prometheus-eval -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 可选依赖\n\n#### 本地推理（推荐用于开源模型）\n```shell\npip install vllm\n```\n\n#### 使用 LLM API（如 OpenAI、Hugging Face TGI、VLLM endpoint 等）\n```shell\npip install litellm\n```\n具体配置请参考 [LiteLLM Provider Docs](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders)。\n\n## 基本使用\n\n以下示例展示如何使用 **Prometheus 2 (7B)** 对一段回复进行绝对评分（1–5 分）：\n\n```python\nfrom prometheus_eval.vllm import VLLM\nfrom prometheus_eval import PrometheusEval\nfrom prometheus_eval.prompts import ABSOLUTE_PROMPT, SCORE_RUBRIC_TEMPLATE\n\n# 加载本地模型（需已安装 vllm）\nmodel = VLLM(model=\"prometheus-eval\u002Fprometheus-7b-v2.0\")\njudge = PrometheusEval(model=model, absolute_grade_template=ABSOLUTE_PROMPT)\n\ninstruction = \"Struggling with a recent break-up, a person opens up about the intense feelings of loneliness and sadness. They ask for advice on how to cope with the heartbreak and move forward in life.\"\nresponse = \"I'm genuinely sorry to hear about your break-up. This can be an immensely challenging time filled with intense emotions of loneliness and sorrow. It's important to remember that it's normal to experience these feelings; they are a testament to the deep connection you once had. Firstly, don't hesitate to embrace your emotions, let them flow freely. Ignoring them often complicates the healing process. It's natural to be upset, to shed tears, or to feel angry. This is all part of the journey towards healing. Self-care can be a healing balm in these tough times. This could mean indulging in your favourite book, journaling your thoughts, or even just a quiet cup of tea. 
Small acts of kindness to oneself can significantly improve your emotional well-being. Also, don't isolate yourself. Surround yourself with supportive individuals who can offer comfort and understanding. This might include friends, family, or even a mental health professional. They can provide much-needed emotional support and strategies to help you cope. Lastly, remember there's no set timeline for healing. It's different for everyone and it's okay to take as long as you need. Keep your focus on the future and the potential it holds. Though it may not seem possible now, you will emerge stronger from this experience. It's okay to ask for help, it's okay to grieve, and it's okay to take it one day at a time. You're not alone in this journey, and there are people ready and willing to help. It's okay to heal at your own pace.\"\n\nreference_answer = \"I can only imagine how difficult this time must be for you. Heartbreak can feel like an overwhelming wave that washes over you, making you feel lost and alone. It's important to remember that it's okay to feel this way; your feelings are valid and it's natural to grieve the loss of a relationship. Firstly, it's crucial to allow yourself to feel the emotions fully. Suppressing or denying them often only prolongs the healing process. It's okay to cry, to feel sad, or to be angry. These are all part of the healing journey. Engaging in self-care activities can also provide a much-needed boost. This could be anything from taking a warm bath, writing in a journal, or simply drinking a cup of your favorite tea. Small acts of self-love can make a big difference in how you feel. Next, try to surround yourself with supportive people who understand your situation and provide comfort. Friends and family can be a great source of strength in times of heartbreak. If you feel comfortable, you might want to consider seeking professional help. 
Therapists and counselors are trained to provide assistance and tools to navigate through difficult times like these. Lastly, it's important to remember that it's okay to take your time to heal. Everyone has their own pace and there's no rush. Try to focus on the future and the possibilities it holds. While it may not seem like it now, you will come out stronger and more resilient from this experience. Remember, it's okay to ask for help and it's okay to feel the way you feel. You are not alone in this journey and there are people who care about you and want to help. It's okay to take one day at a time. Healing is a process, and it's okay to move through it at your own pace.\"\n\nrubric_data = {\n  \"criteria\": \"Is the model proficient in applying empathy and emotional intelligence to its responses when the user conveys emotions or faces challenging circumstances?\",\n  \"score1_description\": \"The model neglects to identify or react to the emotional tone of user inputs, giving responses that are unfitting or emotionally insensitive.\",\n  \"score2_description\": \"The model intermittently acknowledges emotional context but often responds without sufficient empathy or emotional understanding.\",\n  \"score3_description\": \"The model typically identifies emotional context and attempts to answer with empathy, yet the responses might sometimes miss the point or lack emotional profundity.\",\n  \"score4_description\": \"The model consistently identifies and reacts suitably to emotional context, providing empathetic responses. 
Nonetheless, there may still be sporadic oversights or deficiencies in emotional depth.",
  "score5_description": "The model excels in identifying emotional context and persistently offers empathetic, emotionally aware responses that demonstrate a profound comprehension of the user's emotions or situation."
}

score_rubric = SCORE_RUBRIC_TEMPLATE.format(**rubric_data)

feedback, score = judge.single_absolute_grade(
    instruction=instruction,
    response=response,
    rubric=score_rubric,
    reference_answer=reference_answer
)

print("Feedback:", feedback)
print("Score:", score)
```

> ✅ The output contains human-style feedback text and an integer score from 1 to 5.

### Example scenario

The AI customer-service team of a cross-border e-commerce platform is iterating on its multilingual smart-reply system. It needs high-quality evaluation of the answers produced by different LLMs, to ensure accurate, polite, on-brand responses in English, Spanish, and Chinese.

### Without prometheus-eval

- Manual review required three language experts for a combined 20 hours per week; costs were high and feedback cycles stretched to days.
- Generic metrics such as BLEU or ROUGE cannot measure politeness, completeness of information, or tone fit.
- Automatic scoring with GPT-4 was tried, but API costs were steep and results on non-English languages were unstable, so it could not scale.
- There was no unified, reproducible standard for comparing reply quality across models, leaving the optimization direction unclear.
- There was no quick way to check whether a new model version had regressed on a specific language (e.g., Latin American Spanish).

### With prometheus-eval

- A locally deployed M-Prometheus 7B model scores responses in seconds, cutting evaluation costs by more than 90%.
- Fine-grained criteria in the style of BiGGen-Bench (e.g., "does the reply include an apology", "does it resolve the user's core issue") capture business-relevant quality dimensions.
- Scores agree closely with human judgment in Chinese, English, and Spanish, with Pearson correlations above 0.65.
- Outputs from multiple candidate models can be compared in batch to identify the best version quickly, with visual reports to support decisions.
- Relative (pairwise-ranking) mode determines clearly whether model A significantly beats model B, speeding up iteration.

prometheus-eval turns quality evaluation for a multilingual AI customer-service system from an expensive, slow manual process into an efficient, precise, scalable automated loop.

### Project facts

- Description: codebase for running inference with, and training, foundation models specialized in evaluating other foundation models.
- Repository: https://github.com/prometheus-eval · Homepage: https://seungonekim.github.io/ · Contact: seungone@cmu.edu
- Stars: 1063 · Forks: 69 · License: Apache-2.0 · Platforms: Linux, macOS, Windows
- Languages: Python 98.4%, Shell 1.3%, Makefile 0.3%, MDX <0.1%
- Hardware: an NVIDIA GPU is recommended for local inference; Prometheus 2 (7B) needs at least 16 GB of VRAM. The required CUDA version is not specified.
- Installation: the main library installs via pip; local inference additionally requires vllm; calling hosted LLM APIs (OpenAI, Hugging Face TGI, etc.) goes through LiteLLM and needs the corresponding configuration. Model weights are downloaded from Hugging Face; prometheus-7b-v2.0, for example, takes several GB of storage.
- Key dependencies: vllm, litellm, torch, transformers, accelerate
- Topics: evaluation, llm, llmops, python, gpt4, llm-as-a-judge, llm-as-evaluator

### FAQ

**How are the HHH and MT_Bench_human datasets evaluated, and where are human scores for the other validation sets?**
HHH is labeled with tie: 0/1 and MT_Bench_human with selected/rejected. Human scores are currently provided only for the Flask dataset; the other datasets rely mainly on GPT-4 scores. The maintainers recommend evaluating against the data at https://github.com/prometheus-eval/prometheus-eval/tree/main/eval/benchmark/data. On feedback_collection_ood_test, Prometheus-7b-v2.0 and Prometheus-8x7b-v2.0 reach Spearman correlations with GPT-4 (0613) of 0.909 and 0.912, respectively. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/17)

**Is the Preference data an extension of the Feedback data, and how are the two related?**
Yes. The Preference Collection is built from the Feedback Collection. The Feedback Collection holds 5 responses per instruction (100K rows in total); the Preference Collection pairs those 5 responses (C(5,2) = 10 pairs per instruction), producing 200K rows of preference data. The Preference dataset therefore has twice as many rows as the Feedback dataset, over the same 20K instructions. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/17)

**How can scoring and feedback generation be sped up?**
Use batch grading and set the model precision to bfloat16 (dtype='bfloat16'). The official README includes a batching example; configured sensibly, an A100 GPU processes roughly 1,000 responses every 5 minutes. Use a max_model_len that is long enough (≥ 4096 recommended) to avoid generation failures. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/25)

**Is the model reliable with custom scoring rubrics?**
Prometheus 2 supports custom rubrics, and the community reports several successful cases. For specialized domains such as math or programming, the fine-tuned Prometheus 2 BGB models (e.g., prometheus-bgb-8x7b-v2.0) perform better on reasoning tasks. As long as the rubric is stated clearly (e.g., "does the code contain bugs or technical debt"), the model usually evaluates effectively. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/32)

**The examples produce no feedback or score — what is wrong?**
The most common cause is a max_model_len that is too small. For evaluation tasks such as HHH, Auto-J, and Flask, input plus output usually exceeds 4096 tokens, so set max_model_len to at least 4096. The first generation can also fail for formatting reasons and may need several retries (one user reported needing 18 attempts). (Source: https://github.com/prometheus-eval/prometheus-eval/issues/11)

**Can the model emit structured JSON so results are easy to parse?**
By default the model outputs unstructured text. The community has asked for native JSON output in a future version (e.g., fields "score": int and "feedback": string). The current output can be parsed with post-processing, but the format is not always stable. The maintainers are tracking the request and have asked users what JSON structure they expect. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/56)

**How do I load prometheus-8x7b-v2.0 (a MoE model)?**
Load it through vLLM with a compatible environment. The official experiments used four A100 GPUs:

```python
from prometheus_eval.vllm import VLLM

model_path = "prometheus-eval/prometheus-8x7b-v2.0"
model = VLLM(model=model_path, tensor_parallel_size=4)
```

If compilation errors occur, try upgrading GCC. Suggested dependency versions: vllm 0.5.1, torch 2.3.0, transformers 4.43.3. (Source: https://github.com/prometheus-eval/prometheus-eval/issues/52)

### Releases

**v0.1.20** (2024-09-02)
- Bump aiohttp from 3.9.5 to 3.10.2 in /libs/prometheus-eval, by @dependabot (#54)
- Bump litellm from 1.40.0 to 1.40.16 in /BiGGen-Bench, by @dependabot (#53)
- Bump urllib3 from 2.2.1 to 2.2.2 in /BiGGen-Bench, by @dependabot (#55)
- Fix for wrong parsing of score results, by @rafaelliu (#59, first contribution)

**v0.1.19** (2024-07-30)
- Hotfix/parser, by @scottsuk0306 (#51)

**v0.1.18** (2024-07-19)
- Fix typo in BigGenBench/requirements.txt, by @MattYoon (#41, first contribution)
- Bump litellm from 1.40.7 to 1.40.16 in /libs/prometheus-eval, by @dependabot (#45, first contribution)
- Release 0.1.18, by @scottsuk0306 (#49)

**v0.1.17** (2024-06-14)
- Fix relative grade, by @scottsuk0306 (#40)

**v0.1.16** (2024-06-11)
- Litellm bugfix, by @ilyalasy (#38, first contribution)

**v0.1.15** (2024-06-05)
- Fix the Huggingface example in README.md, by @scottsuk0306 (#21)
- Add `columns_to_keep` to fix a dataset loading issue, by @noiji (#31, first contribution)
- New feature: support BiGGen-Bench evaluation, by @scottsuk0306 (#33)

**v0.1.13** (2024-05-06, initial release)
- Initialize lib (#1), update progress (#2), package tests (#3), update train and eval folders (#4), docs (#5, #8, #10, #12, #14, #19), a general fix (#13), and a prompt-newline fix (#20), by @scottsuk0306 (first contribution)
- Update README.md, by @eltociear (#7, first contribution)
- Fix `RELATIVE_PROMPT` and `RELATIVE_PROMPT_WO_REF`, by @alvarobartt (#18, first contribution)

Full changelogs: https://github.com/prometheus-eval/prometheus-eval/releases
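The Feedback-to-Preference relationship described in the FAQ is plain pairwise combination; the row counts can be checked in a few lines (the `response_i` names are placeholders, not real dataset fields):

```python
from itertools import combinations

# 5 responses are collected per instruction in the Feedback Collection.
responses = [f"response_{i}" for i in range(1, 6)]

# The Preference Collection pairs them up: C(5, 2) = 10 pairs per instruction.
pairs = list(combinations(responses, 2))
print(len(pairs))  # 10

# Over the 20K instructions this yields 20_000 * 10 = 200_000 preference rows,
# twice the 100_000 feedback rows (20_000 instructions * 5 responses each).
print(20_000 * len(pairs))  # 200000
```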
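Until native JSON output lands (see the structured-output entry in the FAQ), a thin post-processing parser is a common workaround. A minimal sketch, assuming the judge writes its feedback first and then the integer score after a `[RESULT]` marker, as the absolute-grading prompt requests; `parse_judge_output` is a hypothetical helper name, not part of the library:

```python
import re
from typing import Optional, Tuple

def parse_judge_output(text: str) -> Tuple[Optional[str], Optional[int]]:
    """Split a raw judge completion into (feedback, score).

    Returns (None, None) when no score in the 1-5 range follows a
    '[RESULT]' marker, so the caller can retry the generation (the FAQ
    notes that the first attempt sometimes fails for formatting reasons).
    """
    match = re.search(r"\[RESULT\]\s*([1-5])\b", text)
    if not match:
        return None, None
    feedback = text[:match.start()].strip()
    return feedback, int(match.group(1))

raw = "The response acknowledges the user's frustration clearly. [RESULT] 4"
feedback, score = parse_judge_output(raw)
print(score)  # 4
```

Because the output format is not guaranteed to be stable, treating a failed parse as "retry" rather than "score 0" avoids silently corrupting aggregate statistics.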