[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-lmarena--arena-hard-auto":3,"similar-lmarena--arena-hard-auto":84},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":18,"owner_twitter":19,"owner_website":20,"owner_url":21,"languages":22,"stars":31,"forks":32,"last_commit_at":33,"license":34,"difficulty_score":35,"env_os":36,"env_gpu":37,"env_ram":36,"env_deps":38,"category_tags":46,"github_topics":18,"view_count":35,"oss_zip_url":18,"oss_zip_packed_at":18,"status":49,"created_at":50,"updated_at":51,"faqs":52,"releases":83},9493,"lmarena\u002Farena-hard-auto","arena-hard-auto","Arena-Hard-Auto: An automatic LLM benchmark. ","Arena-Hard-Auto 是一款专为指令微调大语言模型设计的自动化评测工具，旨在帮助开发者和研究人员在模型正式部署前，高效预测其在真实用户场景（如 LMArena Chatbot Arena）中的表现排名。\n\n传统的大模型评测往往依赖昂贵且耗时的人工投票，或者使用相关性较低的静态数据集。Arena-Hard-Auto 解决了这一痛点，它通过精心筛选的 500 个高难度真实世界查询（涵盖软件工程、数学推理等）以及 250 个创意写作任务，构建了一套极具挑战性的测试集。其核心亮点在于利用 GPT-4.1 和 Gemini-2.5 等先进模型作为“自动裁判”，以更低成本和更快速度模拟人类偏好判断。研究显示，该工具在开放域基准中与人类实际投票结果具有最高的相关性和模型区分度。\n\n此外，最新版本 Arena-Hard-v2.0 还引入了风格控制功能，进一步提升了评估的精细度。无论是希望优化模型性能的算法工程师，还是需要客观对比不同模型能力的科研人员，都可以利用 Arena-Hard-Auto 获得可靠、及时的反馈，从而更自信地推进模型迭代与应用落地。","\u003Cdiv align=\"center\">\n\n# Arena-Hard-Auto\n\n[![Github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArena--Hard-black?logo=github&logoColor=white&labelColor=black&color=black)](https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Arena--Hard-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939) [![Hugging Face Collection](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArena--Hard-fcd022?logo=huggingface&logoColor=000&labelColor)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Flmarena-ai\u002Farena-hard-auto-680998796296d1462c729b6c) [![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLMArena--ai-white?logo=X&logoColor=000&color=000&labelColor=white)](https:\u002F\u002Fx.com\u002Flmarena_ai)\n\n\n\u003Cdiv align=\"center\" style=\"font-family: Arial, sans-serif;\">\n  \u003Cp>\n    \u003Ca href=\"#news\" style=\"text-decoration: none; font-weight: bold;\">News\u003C\u002Fa> •\n    \u003Ca href=\"#leaderboard\" style=\"text-decoration: none; font-weight: bold;\">Leaderboard\u003C\u002Fa> •\n    \u003Ca href=\"#install-dependencies\" style=\"text-decoration: none; font-weight: bold;\">Install\u003C\u002Fa> •\n    \u003Ca href=\"#evaluation\" style=\"text-decoration: none; font-weight: bold;\">Evaluation\u003C\u002Fa> •\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmarena-ai\u002Farena-hard-viewer\" style=\"text-decoration: none; font-weight: bold;\">Demo\u003C\u002Fa> •\n    \u003Ca href=\"#citation\" style=\"text-decoration: none; font-weight: bold;\">Citation\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\u003C\u002Fdiv>\n\n# News\n- **[Apr 23, 2025]** 🎉 **Arena-Hard-v2.0** is finally here! Better judges, new hard prompts, and additional eval for creative writing.\n- **[Oct 14, 2024]** 🎉 **Style Control** is now supported in Arena-Hard-Auto.\n\n## About\n\nArena-Hard-Auto is an automatic evaluation tool for instruction-tuned LLMs. 
Arena-Hard-Auto has the highest correlation and separability with LMArena (Chatbot Arena) among popular open-ended LLM benchmarks ([See Paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939)). If you are curious to see how well your model might perform on LMArena before deploying, we recommend trying Arena-Hard-Auto's newest evaluation set, **Arena-Hard-v2.0-Preview**.\n\n
V2.0 contains 500 fresh, challenging real-world user queries (open-ended software engineering problems, math questions, etc.) and 250 creative writing queries sourced from Chatbot Arena. We employ automatic judges, GPT-4.1 and Gemini-2.5, as cheaper and faster approximators of human preference.\n\n
Although both Arena-Hard-Auto and Chatbot Arena Category Hard ([See Blog](https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-05-17-category-hard\u002F)) employ a similar pipeline to select hard prompts, Arena-Hard-Auto uses an automatic judge as a cheaper and faster approximation of human preference. Check out the [BenchBuilder](BenchBuilder) folder for code and resources on how we curate Arena-Hard-Auto. In the paper we also proposed metrics, such as model separability and agreement with human preference, for evaluating benchmarks' ability to rank models (see [Evaluate Benchmarks](#evaluate-benchmarks) for more information and code).\n\n
## Leaderboard\n\n### Arena-Hard-v2.0-Preview\n\nHard Prompt, Style Control, and Gemini-2.5 as Judge **(Official Configuration)**:\n```console\n
                                      Model  Scores (%)         CI (%)\n
0                             o3-2025-04-16        85.9  (-0.8 \u002F +0.9)\n
1                   o4-mini-2025-04-16-high        79.1  (-1.4 \u002F +1.2)\n
2                                gemini-2.5        79.0  (-2.1 \u002F +1.8)\n
3                        o4-mini-2025-04-16        74.6  (-1.8 \u002F +1.6)\n
4                          gemini-2.5-flash        68.6  (-1.6 \u002F +1.6)\n
5                   o3-mini-2025-01-31-high        66.1  (-1.5 \u002F +2.1)\n
6                        o1-2024-12-17-high        61.0  (-2.0 \u002F +2.1)\n
7   claude-3-7-sonnet-20250219-thinking-16k        59.8  (-2.0 \u002F +1.8)\n
8                           Qwen3-235B-A22B        58.4  (-1.9 \u002F +2.1)\n
9                               deepseek-r1        58.0  (-2.2 \u002F +2.0)\n
10                            o1-2024-12-17        55.9  (-2.2 \u002F +1.8)\n
11                          gpt-4.5-preview        50.0  (-1.9 \u002F +2.0)\n
12                       o3-mini-2025-01-31        50.0  (-0.0 \u002F +0.0)\n
13                                  gpt-4.1        50.0  (-1.9 \u002F +1.7)\n
14                             gpt-4.1-mini        46.9  (-2.4 \u002F +2.1)\n
15                                Qwen3-32B        44.5  (-2.2 \u002F +2.1)\n
16                                  QwQ-32B        43.5  (-2.5 \u002F +2.1)\n
17                            Qwen3-30B-A3B        33.9  (-1.6 \u002F +1.5)\n
18               claude-3-5-sonnet-20241022        33.0  (-2.3 \u002F +1.8)\n
19                                 s1.1-32B        22.3  (-1.7 \u002F +1.5)\n
20           llama4-maverick-instruct-basic        17.2  (-1.5 \u002F +1.2)\n
21                           Athene-V2-Chat        16.4  (-1.4 \u002F +1.4)\n
22                           gemma-3-27b-it        15.0  (-1.4 \u002F +1.0)\n
23                                 Qwen3-4B        15.0  (-1.1 \u002F +1.5)\n
24                             gpt-4.1-nano        13.7  (-1.1 \u002F +1.0)\n
25       Llama-3.1-Nemotron-70B-Instruct-HF        10.3  (-0.8 \u002F +1.0)\n
26                     Qwen2.5-72B-Instruct        10.1  (-0.9 \u002F +1.3)\n
27                         OpenThinker2-32B         3.2  (-0.3 \u002F +0.3)\n```\n\n
Hard Prompt, Style Control, and GPT-4.1 as Judge **(If you prefer the OpenAI API)**:\n```console\n
                                      Model  Scores (%)         CI (%)\n
0                             o3-2025-04-16        87.0  (-1.0 \u002F +1.0)\n
1                   o4-mini-2025-04-16-high        81.7  (-1.2 \u002F +1.2)\n
2                        o4-mini-2025-04-16        78.0  (-1.3 \u002F +1.4)\n
3                   o3-mini-2025-01-31-high        64.8  (-2.1 \u002F +1.9)\n
4                        o1-2024-12-17-high        58.7  (-2.3 \u002F +2.1)\n
5                                   gpt-4.1        58.3  (-2.0 \u002F +2.3)\n
6                             o1-2024-12-17        50.2  (-2.2 \u002F +1.8)\n
7                        o3-mini-2025-01-31        50.0  (-0.0 \u002F +0.0)\n
8                                gemini-2.5        49.1  (-2.5 \u002F +2.4)\n
9                              gpt-4.1-mini        48.6  (-2.7 \u002F +1.9)\n
10                              deepseek-r1        48.0  (-2.6 \u002F +2.3)\n
11  claude-3-7-sonnet-20250219-thinking-16k        47.0  (-1.9 \u002F +2.3)\n
12                          Qwen3-235B-A22B        46.7  (-1.9 \u002F +2.4)\n
13                         gemini-2.5-flash        45.1  (-2.7 \u002F +2.1)\n
14                          gpt-4.5-preview        43.0  (-1.9 \u002F +2.2)\n
15                                  QwQ-32B        36.1  (-2.0 \u002F +2.2)\n
16                                Qwen3-32B        35.8  (-2.1 \u002F +2.2)\n
17                            Qwen3-30B-A3B        28.7  (-1.4 \u002F +2.1)\n
18               claude-3-5-sonnet-20241022        25.8  (-1.7 \u002F +1.8)\n
19                                 s1.1-32B        18.3  (-2.3 \u002F +2.2)\n
20                             gpt-4.1-nano        15.4  (-1.1 \u002F +1.2)\n
21                           Athene-V2-Chat        12.6  (-1.2 \u002F +1.3)\n
22                                 Qwen3-4B        12.6  (-1.1 \u002F +1.5)\n
23           llama4-maverick-instruct-basic        12.0  (-1.0 \u002F +1.2)\n
24                           gemma-3-27b-it         9.7  (-0.9 \u002F +1.1)\n
25                     Qwen2.5-72B-Instruct         8.0  (-0.7 \u002F +0.9)\n
26       Llama-3.1-Nemotron-70B-Instruct-HF         6.8  (-0.6 \u002F +0.8)\n
27                         OpenThinker2-32B         2.3  (-0.2 \u002F +0.3)\n```\n\n
Creative Writing, Ensemble GPT-4.1 and Gemini 2.5 as Judges **(Best Configuration for Creative Writing)**:\n```console\n
                                      Model  Scores (%)         CI (%)\n
0                                gemini-2.5        90.8  (-1.2 \u002F +1.3)\n
1                             o3-2025-04-16        88.8  (-1.1 \u002F +1.0)\n
2                          gemini-2.5-flash        83.9  (-1.3 \u002F +1.4)\n
3                               deepseek-r1        77.0  (-2.0 \u002F +1.4)\n
4                           Qwen3-235B-A22B        73.5  (-1.8 \u002F +1.5)\n
5                            gemma-3-27b-it        69.9  (-1.9 \u002F +1.7)\n
6   claude-3-7-sonnet-20250219-thinking-16k        63.9  (-1.7 \u002F +1.9)\n
7                                   gpt-4.1        61.5  (-1.9 \u002F +1.9)\n
8                                   QwQ-32B        60.9  (-2.0 \u002F +1.6)\n
9                        o1-2024-12-17-high        59.9  (-2.1 \u002F +1.7)\n
10                  o4-mini-2025-04-16-high        58.7  (-1.8 \u002F +1.9)\n
11                            o1-2024-12-17        56.6  (-1.8 \u002F +1.8)\n
12                       o4-mini-2025-04-16        55.6  (-1.8 \u002F +2.0)\n
13                                Qwen3-32B        53.3  (-1.9 \u002F +1.6)\n
14                          gpt-4.5-preview        51.4  (-1.9 \u002F +2.0)\n
15                     gemini-2.0-flash-001        50.0  (-0.0 \u002F +0.0)\n
16                  o3-mini-2025-01-31-high        43.0  (-1.7 \u002F +2.1)\n
17                            Qwen3-30B-A3B        34.9  (-2.0 \u002F +1.6)\n
18                             gpt-4.1-mini        28.2  (-1.8 \u002F +1.8)\n
19       Llama-3.1-Nemotron-70B-Instruct-HF        26.9  (-2.0 \u002F +1.8)\n
20               claude-3-5-sonnet-20241022        24.2  (-1.5 \u002F +1.5)\n
21                         OpenThinker2-32B        23.6  (-1.5 \u002F +1.3)\n
22                           Athene-V2-Chat        18.1  (-1.6 \u002F +1.5)\n
23                                 Qwen3-4B        13.2  (-1.2 \u002F +1.2)\n
24                             gpt-4.1-nano        10.7  (-1.1 \u002F +1.1)\n
25           llama4-maverick-instruct-basic        10.5  (-1.1 \u002F +1.0)\n
26                     Qwen2.5-72B-Instruct        10.2  (-1.1 \u002F +1.1)\n
27                                 s1.1-32B         8.2  (-0.9 \u002F +0.\n```\n\n
For older leaderboards, such as Arena-Hard-v0.1, see [past-leaderboards](\u002Fmisc\u002Fpast_leaderboards.md).\n\n
## Install Dependencies\n```\ngit clone https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto.git\ncd arena-hard-auto\npip install -r requirements.txt\npip install -r requirements-optional.txt  # Optional dependencies (e.g., anthropic sdk)\n```\n\n
## Download dataset\nWe have pre-generated answers and judgments for many popular models. You can browse them with the online [demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmarena-ai\u002Farena-hard-viewer) or download them (with [`git-lfs`](https:\u002F\u002Fgit-lfs.com) installed) by\n```console\n> git lfs install\n> git clone git@hf.co:datasets\u002Flmarena-ai\u002Farena-hard-auto arena-hard-data\n# copy answers\u002Fjudgments to the data directory\n> cp -r arena-hard-data\u002Fdata .\n```\n\n
Then run\n```console\n> python show_result.py\n
                                      Model  Scores (%)         CI (%)\n
0                             o3-2025-04-16        87.6  (-0.8 \u002F +1.0)\n
1                   o4-mini-2025-04-16-high        82.7  (-1.4 \u002F +1.3)\n
2                        o4-mini-2025-04-16        78.9  (-1.6 \u002F +1.6)\n```\n\n
## Evaluate\n\n### Step 1. Set up the endpoint config for your model\n\n
Fill in your API endpoint in `config\u002Fapi_config.yaml`. We support OpenAI-compatible API servers, Anthropic, Vertex AI, and more. You will find examples in `config\u002Fapi_config.yaml`.\n\n
You may use an inference engine such as [vLLM](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest\u002Fserving\u002Fopenai_compatible_server.html) or [SGLang](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang?tab=readme-ov-file#using-local-models) to host your model behind an OpenAI-compatible API server.\n\n
We also include support for fast built-in inference with SGLang; see examples in `config\u002Fapi_config.yaml` and the implementation in `utils\u002Fcompletion.py`. See `misc\u002Fsglang_setup.bash` for environment setup.\n\n
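To make Step 1 concrete, here is a minimal sketch of what an entry in `config\u002Fapi_config.yaml` could look like for a model served behind a local OpenAI-compatible endpoint (e.g., vLLM). The field names are assumptions patterned after the Bedrock example later in this README; the entry name, URL, and key are placeholders, so treat the shipped examples in `config\u002Fapi_config.yaml` as authoritative.\n\n```yaml\n
# Hypothetical entry for a locally hosted model (field names assumed,\n
# patterned after the Bedrock example later in this README).\n
my-local-model:\n
  model: my-local-model                       # alias used in model_list\n
  endpoints:\n
    - api_base: http:\u002F\u002Flocalhost:8000\u002Fv1      # vLLM\u002FSGLang server address\n
      api_key: empty                          # placeholder key for local servers\n
  api_type: openai\n
  parallel: 8\n
  max_tokens: 4096\n
  temperature: 0.0\n```\n\n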
### Step 2. Generate Model Answers\n\nIn `config\u002Fgen_answer_config.yaml`, add your model name to `model_list`.\n\n
Run the command to generate answers:\n```console\n> python gen_answer.py\n```\n\n
A caching feature is implemented: the code will skip generating an answer when an answer\u002Fjudgment for the same prompt already exists (this feature is not supported for the built-in SGLang server).\n\n
### Step 3. Generate Judgments\n\nIn `config\u002Farena-hard-v2.0.yaml`, add your model name to `model_list`.\n```yaml\n...\n# Add your model below for evaluation\nmodel_list:\n  - deepseek-r1\n  - [YOUR-MODEL-NAME]\n```\n\n
We recommend employing GPT-4.1 as the judge for fast, stable judge inference. To use Gemini-2.5, comment out:\n```yaml\njudge_model: gpt-4.1\ntemperature: 0.0\nmax_tokens: 16000\n```\n\n
and uncomment:\n```yaml\njudge_model: gemini-2.5\ntemperature: 1.0\nmax_tokens: 32000\n```\n\n
Run the command to generate judgments:\n```console\n> python gen_judgment.py\n```\n\n
For Ensemble-as-Judges, we suggest running inference with both judges independently; the results are aggregated for you when the leaderboard is displayed (see Step 4).\n\n
Judgment caching is also implemented: the code will skip judgments that have already been generated or that lack one of the model answers.\n\n
### Step 4. Show result\nOutput model win rates for **Arena-Hard-v2.0-Preview (Hard Prompt, Style Control, GPT-4.1 as Judge)**:\n```console\n> python show_result.py --judge-names gpt-4.1 --control-features markdown length\n```\n\n
Output model win rates for **Arena-Hard-v2.0-Preview (Creative Writing, Ensemble GPT-4.1 and Gemini 2.5 as Judges)**:\n```console\n> python show_result.py --judge-names gpt-4.1 gemini-2.5 --category creative_writing\n```\n\n
### Step 5. Benchmark Viewer\nYou can review answers and judgment results using our Gradio script (requires `gradio>=5.25.2`).\n```console\n> python qa_browser.py --share\n```\n\n
## Style Control\nFollowing the newly introduced Style Control on Chatbot Arena, we release Style Control on Arena-Hard-Auto! We employ the same Style Control methods as proposed in the [blogpost](https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-08-28-style-control\u002F). Please refer to the blogpost for the methodology and technical background.\n\n
Before applying style control, make sure your model answers have the proper style attributes generated. Either pull the latest data from the [huggingface repo](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flmarena-ai\u002Farena-hard-auto), or run the following script!\n\n
To add style attributes to your model answers, use `add_markdown_info.py`. The following command takes model answers from `--dir`, appends style attributes (token length, number of headers, etc.), and saves the new answers in `--output-dir`.\n\n
```console\n> python add_markdown_info.py --dir data\u002Farena-hard-v0.1\u002Fmodel_answer --output-dir data\u002Farena-hard-v0.1\u002Fmodel_answer\n```\n\n
To control for style (token length and markdown elements), use `--control-features` or `-f` when running `show_result.py`.\n\n
```console\n> python show_result.py -f markdown length # style control\n> python show_result.py -f markdown # control for markdown density only\n> python show_result.py -f length # length control only\n```\n\n
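To give a sense of how style control works under the hood (the real implementation lives in `show_result.py` and follows the LMSYS blogpost above; this is only a toy sketch on synthetic battles, with all names and data made up): pairwise outcomes are fit with a Bradley-Terry-style logistic regression that includes extra covariates for style differences, so the style coefficients absorb stylistic advantages and the per-model coefficients become style-adjusted strengths.\n\n```python\n
# Toy illustration of style control (not the repository's implementation).\n
import numpy as np\n
from sklearn.linear_model import LogisticRegression\n\n
rng = np.random.default_rng(0)\n
n_models, n_battles = 4, 2000\n
a = rng.integers(0, n_models, n_battles)        # model A per battle\n
b = rng.integers(0, n_models, n_battles)        # model B per battle\n
keep = a != b                                   # drop self-battles\n
a, b = a[keep], b[keep]\n\n
X = np.zeros((len(a), n_models + 1))\n
X[np.arange(len(a)), a] += 1.0                  # +1 for model A\n
X[np.arange(len(a)), b] -= 1.0                  # -1 for model B\n
X[:, -1] = rng.normal(size=len(a))              # style difference (synthetic)\n
y = rng.integers(0, 2, len(a))                  # 1 if model A won (synthetic)\n\n
fit = LogisticRegression(fit_intercept=False).fit(X, y)\n
print('style-adjusted strengths:', fit.coef_[0][:n_models])\n
print('style coefficient:', fit.coef_[0][-1])\n```\n\n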
## Evaluate Benchmarks\nWe outline two key properties that a benchmark aiming to approximate human preference should possess to provide meaningful comparisons between models:\n1. Separability: the benchmark should separate models with high confidence.\n2. Alignment with Human Preference: the benchmark should agree with human preference.\n\n
While previous works have focused on alignment, separability is also a crucial consideration when comparing models of similar quality (e.g., different checkpoints from the same training run). However, achieving high-confidence separability is challenging due to limitations in prompt design and inherent variances in LLM evaluations. Overly simplistic prompts fail to distinguish between models, while the randomness in human and LLM judgments leads to inconsistent predictions. As a result, it is often difficult to confidently determine if a model’s apparent performance reflects a genuine difference in capability or merely noisy observations, highlighting a need for methods to verify whether a benchmark can reliably separate similar models.\n\n
Statistical measures like Pearson (Pearson, 1895) and Spearman Correlations (Spearman, 1961), commonly used in benchmarks such as AlpacaEval (Li et al., 2023) to measure correlation to human preference ranking, may fail to adequately address model separability and ranking instability. In addition, these measures only provide a coarse signal of ranking correlation without quantifying the magnitude of performance differences between model pairs. To address these shortcomings, we develop three novel metrics: **Separability with Confidence**, **Agreement with Confidence**, and **Pair Rank Brier Score**.\n\n
**Separability with Confidence** quantifies the benchmark’s confidence by measuring its consistency in predicting the winner of a model pair across random seeds through bootstrapping. This is done by calculating the percentage of model pairs that have non-overlapping confidence intervals of their benchmark scores. A higher percentage indicates that the benchmark is more confident in distinguishing between the performance of different models, as the confidence intervals of their scores do not overlap.\n\n
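As a rough sketch of how that percentage could be computed (synthetic data and simplified structure; the official code is in the colab notebook linked below):\n\n```python\n
# Toy sketch of Separability with Confidence: bootstrap each model's mean\n
# score, form percentile confidence intervals, then report the fraction of\n
# model pairs whose intervals do not overlap.\n
from itertools import combinations\n
import numpy as np\n\n
rng = np.random.default_rng(0)\n
# synthetic per-battle outcomes (1 = win) for three hypothetical models\n
outcomes = {m: rng.integers(0, 2, 500) for m in ['model_a', 'model_b', 'model_c']}\n\n
def bootstrap_ci(x, n_boot=1000, alpha=0.05):\n
    means = [np.mean(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]\n
    return np.quantile(means, alpha \u002F 2), np.quantile(means, 1 - alpha \u002F 2)\n\n
cis = {m: bootstrap_ci(x) for m, x in outcomes.items()}\n
pairs = list(combinations(cis, 2))\n
separable = sum(cis[m1][0] > cis[m2][1] or cis[m2][0] > cis[m1][1] for m1, m2 in pairs)\n
print('separability with confidence:', separable \u002F len(pairs))\n```\n\n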
For **Agreement with Confidence** and **Pair Rank Brier Score**, please refer to Section 3 of our [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939). The code for calculating these metrics can be found in this [colab notebook](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1ar6XLWREN_dXEh404WNOxroFVUe_4njp).\n\n
## Integration with Amazon Bedrock API\nWe have now added the capability to benchmark LLMs hosted on **Amazon Bedrock** with Arena-Hard-Auto. Specifically, we added the Amazon Bedrock invoke API to `utils\u002Fcompletion.py`, which allows you to use different models hosted on Amazon Bedrock with Arena-Hard.\n\n
Currently we support the following models:\n
1. Anthropic models: Claude 3 Haiku, Claude 3 Sonnet, Claude 3.5 Sonnet, Claude 3 Opus, Claude 3.5 Sonnet v2, Claude 3.7 Sonnet\n
2. Mistral models: Mistral 7B Instruct, Mistral 8x7B Instruct, Mistral Large v1, Mistral Large v2, Mistral Small, Pixtral Large\n
3. Meta Llama models: LLaMA 3 8B Instruct, LLaMA 3 70B Instruct, LLaMA 3.1 8B Instruct, LLaMA 3.1 70B Instruct, LLaMA 3.1 405B Instruct, LLaMA 3.2 1B Instruct, LLaMA 3.2 3B Instruct, LLaMA 3.2 11B Instruct, LLaMA 3.2 90B Instruct, LLaMA 2 Chat 13B, LLaMA 2 Chat 70B\n
4. Amazon Nova models: Amazon Nova Lite, Amazon Nova Pro, Amazon Nova Micro, Amazon Nova Premier\n
5. DeepSeek-R1\n\n
To **add a new model hosted on Amazon Bedrock**, you need to update two files: `config\u002Fapi_config.yaml` and `utils\u002Fcompletion.py`.\n\n
### 1. Update `config\u002Fapi_config.yaml`\n\nDefine a new entry for the model with the correct `model_id`, `api_type`, and generation parameters.\n\n**Example:**\n\n
```yaml\naws_nova_light_v1:\n  model: aws_nova_light_v1\n  model_id: us.amazon.nova-lite-v1:0\n  endpoints: null\n  api_type: aws_nova\n  parallel: 8\n  max_tokens: 4096\n  temperature: 0.0\n```\n\n
**Key Fields**\n
1. `model`: internal alias used for referencing this config.\n
2. `model_id`: Bedrock-specific model identifier.\n
3. `api_type`: must match an API type registered in `utils\u002Fcompletion.py`.\n
4. `endpoints`: set to null for the default Bedrock endpoint, or override with a custom endpoint.\n
5. `parallel`: controls the number of parallel inference calls (adjust for throughput).\n
6. `max_tokens`: maximum output tokens.\n
7. `temperature`: controls randomness of generation (0.0 for deterministic).\n\n
Find more examples in `config\u002Fapi_config_bedrock_models.yaml`. Refer to the [Amazon Bedrock documentation](https:\u002F\u002Fdocs.aws.amazon.com\u002Fbedrock\u002Flatest\u002Fuserguide\u002Fmodels-supported.html) for model IDs and capabilities.\n\n
### 2. Register a Model Handler in `utils\u002Fcompletion.py`\nCreate a new function decorated with `@register_api(\"\u003Capi_type>\")` to define how inputs are formatted, sent to Bedrock using boto3, and how the response is parsed.\n\n
You can use existing examples as templates:\n
- `@register_api(\"aws_llama\")` handles LLaMA models\n
- `@register_api(\"aws_nova\")` handles Nova models\n\n
These functions typically use helpers like `create_llama3_body()` or `create_nova_messages()` and send requests using the Bedrock `invoke_model` API, as in the sketch after the list below.\n\n
**Pay attention to**:\n
- The `api_type` in `api_config.yaml`, which must match the name used in the `@register_api(...)` decorator.\n
- Input formatting (e.g., prompt structure, message lists)\n
- Parameter mapping (`temperature`, `max_tokens`, `model_id`)\n
- Response parsing (e.g., `generation` vs. nested `output.message.content`)\n\n
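For orientation, here is a minimal, self-contained sketch of what such a handler might look like for a Nova-style model. The decorator below is a local stand-in so the snippet runs on its own (the actual `register_api` lives in `utils\u002Fcompletion.py`), and the Nova request and response shapes follow AWS documentation but should be verified against the existing `aws_nova` handler.\n\n```python\n
# Hedged sketch of a Bedrock handler, not the repository's implementation.\n
import json\n
import boto3\n\n
def register_api(name):\n
    # stand-in so this sketch runs alone; use the repo's decorator in practice\n
    def wrap(fn):\n
        return fn\n
    return wrap\n\n
@register_api('aws_nova_example')  # must match api_type in api_config.yaml\n
def chat_completion_aws_nova_example(model_id, prompt, temperature, max_tokens):\n
    client = boto3.client('bedrock-runtime', region_name='us-east-1')\n
    body = {\n
        'messages': [{'role': 'user', 'content': [{'text': prompt}]}],\n
        'inferenceConfig': {'temperature': temperature, 'maxTokens': max_tokens},\n
    }\n
    response = client.invoke_model(modelId=model_id, body=json.dumps(body))\n
    payload = json.loads(response['body'].read())\n
    # Nova-style responses nest the text under output.message.content\n
    return payload['output']['message']['content'][0]['text']\n```\n\n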
By following this two-step process, you can easily extend support to any Bedrock-hosted model that follows a compatible invocation structure. For examples, see the existing handlers for Claude, LLaMA, and Amazon Nova in the repository.\n\n
## Community Contribution\n\nFeel free to submit a PR or open up an issue!\n\n
If you want to add your model to the leaderboard, please email me the following:\n
1. An OpenAI-compatible endpoint to your model.\n
2. An OpenAI API key for me to run judge inference.\n\n
Sorry for the inconvenience! Since Arena-Hard-Auto is open data, we want to avoid people cheating on our leaderboard. If we find anything suspicious, we reserve the right to not add your model to our leaderboard.\n\n
## Citation\nThe code in this repository is developed from the papers below. Please cite it if you find the repository helpful.\n```\n
@article{li2024crowdsourced,\n  title={From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline},\n  author={Li, Tianle and Chiang, Wei-Lin and Frick, Evan and Dunlap, Lisa and Wu, Tianhao and Zhu, Banghua and Gonzalez, Joseph E and Stoica, Ion},\n  journal={arXiv preprint arXiv:2406.11939},\n  year={2024}\n}\n
@misc{arenahard2024,\n    title = {From Live Data to High-Quality Benchmarks: The Arena-Hard Pipeline},\n    url = {https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-04-19-arena-hard\u002F},\n    author = {Tianle Li*, Wei-Lin Chiang*, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E.
Gonzalez, Ion Stoica},\n    month = {April},\n    year = {2024}\n}\n```\n","\u003Cdiv align=\"center\">\n\n# Arena-Hard-Auto\n\n[![Github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArena--Hard-black?logo=github&logoColor=white&labelColor=black&color=black)](https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-Arena--Hard-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939) [![Hugging Face Collection](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArena--Hard-fcd022?logo=huggingface&logoColor=000&labelColor)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Flmarena-ai\u002Farena-hard-auto-680998796296d1462c729b6c) [![Twitter](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLMArena--ai-white?logo=X&logoColor=000&color=000&labelColor=white)](https:\u002F\u002Fx.com\u002Flmarena_ai)\n\n\n\u003Cdiv align=\"center\" style=\"font-family: Arial, sans-serif;\">\n  \u003Cp>\n    \u003Ca href=\"#news\" style=\"text-decoration: none; font-weight: bold;\">新闻\u003C\u002Fa> •\n    \u003Ca href=\"#leaderboard\" style=\"text-decoration: none; font-weight: bold;\">排行榜\u003C\u002Fa> •\n    \u003Ca href=\"#install-dependencies\" style=\"text-decoration: none; font-weight: bold;\">安装\u003C\u002Fa> •\n    \u003Ca href=\"#evaluation\" style=\"text-decoration: none; font-weight: bold;\">评估\u003C\u002Fa> •\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmarena-ai\u002Farena-hard-viewer\" style=\"text-decoration: none; font-weight: bold;\">演示\u003C\u002Fa> •\n    \u003Ca href=\"#citation\" style=\"text-decoration: none; font-weight: bold;\">引用\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\u003C\u002Fdiv>\n\n# 新闻\n- **[2025年4月23日]** 🎉 **Arena-Hard-v2.0** 终于来了！更好的评判者、新的高难度提示，以及针对创意写作的额外评估。\n- **[2024年10月14日]** 🎉 **风格控制** 现在已支持在 Arena-Hard-Auto 中使用。\n\n## 关于\n\nArena-Hard-Auto 是一个用于指令微调型大语言模型的自动化评估工具。在众多开放式大语言模型基准测试中，Arena-Hard-Auto 与 LMArena（聊天机器人竞技场）的相关性和区分度最高（[参见论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939)）。如果您想在部署之前了解自己的模型在 LMArena 上的表现如何，我们建议您尝试 Arena-Hard-Auto 的最新评估集——**Arena-Hard-v2.0-预览版**。\n\nV2.0 包含 500 条全新的、具有挑战性的真实用户查询（如开放式软件工程问题、数学题等）以及 250 条来自 Chatbot Arena 的创意写作查询。我们采用自动评判者 GPT-4.1 和 Gemini-2.5，作为更经济、更快速的人类偏好近似工具。\n\n尽管 Arena-Hard-Auto 和 Chatbot Arena 的 Hard 类别（[参见博客](https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-05-17-category-hard\u002F)）都采用了类似的流程来筛选高难度提示，但 Arena-Hard-Auto 使用自动评判者作为更经济、更快速的人类偏好近似工具。请查看 [BenchBuilder](BenchBuilder) 文件夹，获取有关我们如何策划 Arena-Hard-Auto 的代码和资源。在论文中，我们还提出了诸如模型区分度和与人类偏好一致率等指标，用于评估基准测试对模型进行排名的能力（更多信息和代码请参阅 [评估基准](#evaluate-benchmarks)）。\n\n## 排行榜\n\n### Arena-Hard-v2.0-预览版\n\n困难提示、风格控制，以及Gemini 2.5作为评判者 **（官方配置）**：\n```console\n                                      模型  分数 (%)         置信区间 (%)\n0                             o3-2025-04-16        85.9  (-0.8 \u002F +0.9)\n1                   o4-mini-2025-04-16-high        79.1  (-1.4 \u002F +1.2)\n2                                gemini-2.5        79.0  (-2.1 \u002F +1.8)\n3                        o4-mini-2025-04-16        74.6  (-1.8 \u002F +1.6)\n4                          gemini-2.5-flash        68.6  (-1.6 \u002F +1.6)\n5                   o3-mini-2025-01-31-high        66.1  (-1.5 \u002F +2.1)\n6                        o1-2024-12-17-high        61.0  (-2.0 \u002F +2.1)\n7   claude-3-7-sonnet-20250219-thinking-16k        59.8  (-2.0 \u002F +1.8)\n8                           Qwen3-235B-A22B        58.4  (-1.9 \u002F +2.1)\n9                               deepseek-r1        58.0  
(-2.2 \u002F +2.0)\n10                            o1-2024-12-17        55.9  (-2.2 \u002F +1.8)\n11                          gpt-4.5-preview        50.0  (-1.9 \u002F +2.0)\n12                       o3-mini-2025-01-31        50.0  (-0.0 \u002F +0.0)\n13                                  gpt-4.1        50.0  (-1.9 \u002F +1.7)\n14                             gpt-4.1-mini        46.9  (-2.4 \u002F +2.1)\n15                                Qwen3-32B        44.5  (-2.2 \u002F +2.1)\n16                                  QwQ-32B        43.5  (-2.5 \u002F +2.1)\n17                            Qwen3-30B-A3B        33.9  (-1.6 \u002F +1.5)\n18               claude-3-5-sonnet-20241022        33.0  (-2.3 \u002F +1.8)\n19                                 s1.1-32B        22.3  (-1.7 \u002F +1.5)\n20           llama4-maverick-instruct-basic        17.2  (-1.5 \u002F +1.2)\n21                           Athene-V2-Chat        16.4  (-1.4 \u002F +1.4)\n22                           gemma-3-27b-it        15.0  (-1.4 \u002F +1.0)\n23                                 Qwen3-4B        15.0  (-1.1 \u002F +1.5)\n24                             gpt-4.1-nano        13.7  (-1.1 \u002F +1.0)\n25       Llama-3.1-Nemotron-70B-Instruct-HF        10.3  (-0.8 \u002F +1.0)\n26                     Qwen2.5-72B-Instruct        10.1  (-0.9 \u002F +1.3)\n27                         OpenThinker2-32B         3.2  (-0.3 \u002F +0.3)\n```\n\n困难提示、风格控制，以及GPT-4.1作为评判者 **（若偏好使用OpenAI API）**\n```console\n                                      模型  分数 (%)         置信区间 (%)\n0                             o3-2025-04-16        87.0  (-1.0 \u002F +1.0)\n1                   o4-mini-2025-04-16-high        81.7  (-1.2 \u002F +1.2)\n2                        o4-mini-2025-04-16        78.0  (-1.3 \u002F +1.4)\n3                   o3-mini-2025-01-31-high        64.8  (-2.1 \u002F +1.9)\n4                        o1-2024-12-17-high        58.7  (-2.3 \u002F +2.1)\n5                                   gpt-4.1        58.3  (-2.0 \u002F +2.3)\n6                             o1-2024-12-17        50.2  (-2.2 \u002F +1.8)\n7                        o3-mini-2025-01-31        50.0  (-0.0 \u002F +0.0)\n8                                gemini-2.5        49.1  (-2.5 \u002F +2.4)\n9                              gpt-4.1-mini        48.6  (-2.7 \u002F +1.9)\n10                              deepseek-r1        48.0  (-2.6 \u002F +2.3)\n11  claude-3-7-sonnet-20250219-thinking-16k        47.0  (-1.9 \u002F +2.3)\n12                          Qwen3-235B-A22B        46.7  (-1.9 \u002F +2.4)\n13                         gemini-2.5-flash        45.1  (-2.7 \u002F +2.1)\n14                          gpt-4.5-preview        43.0  (-1.9 \u002F +2.2)\n15                                  QwQ-32B        36.1  (-2.0 \u002F +2.2)\n16                                Qwen3-32B        35.8  (-2.1 \u002F +2.2)\n17                            Qwen3-30B-A3B        28.7  (-1.4 \u002F +2.1)\n18               claude-3-5-sonnet-20241022        25.8  (-1.7 \u002F +1.8)\n19                                 s1.1-32B        18.3  (-2.3 \u002F +2.2)\n20                             gpt-4.1-nano        15.4  (-1.1 \u002F +1.2)\n21                           Athene-V2-Chat        12.6  (-1.2 \u002F +1.3)\n22                                 Qwen3-4B        12.6  (-1.1 \u002F +1.5)\n23           llama4-maverick-instruct-basic        12.0  (-1.0 \u002F +1.2)\n24                           gemma-3-27b-it         9.7  (-0.9 \u002F +1.1)\n25                     Qwen2.5-72B-Instruct         8.0  (-0.7 \u002F +0.9)\n26       
Llama-3.1-Nemotron-70B-Instruct-HF         6.8  (-0.6 \u002F +0.8)\n27                         OpenThinker2-32B         2.3  (-0.2 \u002F +0.3)\n```\n\n创意写作，由GPT-4.1和Gemini 2.5共同担任评委 **（最适合创意写作的配置）**\n```console\n                                      模型  分数 (%)         置信区间 (%)\n0                                gemini-2.5        90.8  (-1.2 \u002F +1.3)\n1                             o3-2025-04-16        88.8  (-1.1 \u002F +1.0)\n2                          gemini-2.5-flash        83.9  (-1.3 \u002F +1.4)\n3                               deepseek-r1        77.0  (-2.0 \u002F +1.4)\n4                           Qwen3-235B-A22B        73.5  (-1.8 \u002F +1.5)\n5                            gemma-3-27b-it        69.9  (-1.9 \u002F +1.7)\n6   claude-3-7-sonnet-20250219-thinking-16k        63.9  (-1.7 \u002F +1.9)\n7                                   gpt-4.1        61.5  (-1.9 \u002F +1.9)\n8                                   QwQ-32B        60.9  (-2.0 \u002F +1.6)\n9                        o1-2024-12-17-high        59.9  (-2.1 \u002F +1.7)\n10                  o4-mini-2025-04-16-high        58.7  (-1.8 \u002F +1.9)\n11                            o1-2024-12-17        56.6  (-1.8 \u002F +1.8)\n12                       o4-mini-2025-04-16        55.6  (-1.8 \u002F +2.0)\n13                                Qwen3-32B        53.3  (-1.9 \u002F +1.6)\n14                          gpt-4.5-preview        51.4  (-1.9 \u002F +2.0)\n15                     gemini-2.0-flash-001        50.0  (-0.0 \u002F +0.0)\n16                  o3-mini-2025-01-31-high        43.0  (-1.7 \u002F +2.1)\n17                            Qwen3-30B-A3B        34.9  (-2.0 \u002F +1.6)\n18                             gpt-4.1-mini        28.2  (-1.8 \u002F +1.8)\n19       Llama-3.1-Nemotron-70B-Instruct-HF        26.9  (-2.0 \u002F +1.8)\n20               claude-3-5-sonnet-20241022        24.2  (-1.5 \u002F +1.5)\n21                         OpenThinker2-32B        23.6  (-1.5 \u002F +1.3)\n22                           Athene-V2-Chat        18.1  (-1.6 \u002F +1.5)\n23                                 Qwen3-4B        13.2  (-1.2 \u002F +1.2)\n24                             gpt-4.1-nano        10.7  (-1.1 \u002F +1.1)\n25           llama4-maverick-instruct-basic        10.5  (-1.1 \u002F +1.0)\n26                     Qwen2.5-72B-Instruct        10.2  (-1.1 \u002F +1.1)\n27                                 s1.1-32B         8.2  (-0.9 \u002F +0.\n```\n\n如需查看较早的排行榜，例如Arena-Hard-v0.1，请参阅 [past-leaderboards](\u002Fmisc\u002Fpast_leaderboards.md)。\n\n## 安装依赖\n```\ngit clone https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto.git\ncd arena-hard\npip install -r requirements.txt\npip install -r requirements-optional.txt  # 可选依赖（例如Anthropic SDK）\n```\n\n## 下载数据集\n我们预先生成了许多热门模型的回答和评判结果。您可以通过在线[演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flmarena-ai\u002Farena-hard-viewer)浏览这些结果，或者在安装了[`git-lfs`](https:\u002F\u002Fgit-lfs.com)的情况下，通过以下命令下载：\n```console\n> git lfs install\n> git clone git@hf.co:datasets\u002Flmarena-ai\u002Farena-hard-auto arena-hard-data\n\u002F\u002F 将 answers\u002Fjudgments 复制到 data 目录\n> cp -r arena-hard-data\u002Fdata .\n```\n\n然后运行：\n```console\n> python show_result.py\n                                      模型  得分 (%)         置信区间 (%)\n0                             o3-2025-04-16        87.6  (-0.8 \u002F +1.0)\n1                   o4-mini-2025-04-16-high        82.7  (-1.4 \u002F +1.3)\n2                        o4-mini-2025-04-16        78.9  (-1.6 \u002F +1.6)\n```\n\n## 评估\n\n### 
第一步：配置您的模型端点\n在`config\u002Fapi_config.yaml`中填写您的API端点。我们支持与OpenAI兼容的API服务器、Anthropic、Vertex AI等。您可以在`config\u002Fapi_config.yaml`中找到相关示例。\n\n您可以使用诸如[vLLM](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest\u002Fserving\u002Fopenai_compatible_server.html)或[SGLang](https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang?tab=readme-ov-file#using-local-models)之类的推理引擎来托管您的模型，并提供与OpenAI兼容的API服务。\n\n我们还内置了对SGLang快速推理的支持，相关示例见`config\u002Fapi_config.yaml`，实现代码位于`utils\u002Fcompletion.py`中。环境搭建请参考`misc\u002Fsglang_setup.bash`。\n\n### 第二步：生成模型回答\n在`config\u002Fgen_answer_config.yaml`中，在`model_list`里添加您的模型名称。\n\n运行以下命令生成回答：\n```console\n> python gen_answer.py\n```\n\n系统实现了缓存功能。当针对同一提示已有现成的回答或评判时，代码会跳过重新生成步骤（此功能不适用于内置的SGLang服务器）。\n\n### 第三步：生成评判\n在`config\u002Farena-hard-v2.0.yaml`中，在`model_list`里添加您的模型名称。\n```yaml\n...\n# 在下方添加您的模型以进行评估\nmodel_list:\n  - deepseek-r1\n  - [YOUR-MODEL-NAME]\n```\n\n我们推荐使用GPT-4.1作为裁判，以实现快速且稳定的评判推理。若要使用Gemini-2.5，请注释掉以下内容：\n```yaml\njudge_model: gpt-4.1\ntemperature: 0.0\nmax_tokens: 16000\n```\n\n并取消注释：\n```yaml\njudge_model: gemini-2.5\ntemperature: 1.0\nmax_tokens: 32000\n```\n\n运行以下命令生成评判：\n```console\n> python gen_judgment.py\n```\n\n对于“集成裁判”方案，我们建议分别独立地调用两位裁判进行推理，并在展示排行榜时汇总结果（参见第4步）。\n\n评判结果也支持缓存。如果某条评判已存在，或缺少其中一位模型的回答，则会跳过该条目的生成。\n\n### 第四步：展示结果\n输出**Arena-Hard-v2.0-预览版（高难度提示、风格控制、GPT-4.1为裁判）**的模型胜率：\n```console\n> python show_result.py --judge-names gpt-4.1 --control-features markdown length\n```\n\n输出**Arena-Hard-v2.0-预览版（创意写作、GPT-4.1和Gemini 2.5联合裁判）**的模型胜率：\n```console\n> python show_result.py --judge-names gpt-4.1 gemini-2.5 --category creative_writing\n```\n\n### 第五步：基准评测查看器\n您可以通过我们的Gradio脚本（`gradio>=5.25.2`）查看回答和评判结果：\n```console\n> python qa_browser.py --share\n```\n\n## 风格控制\n继Chatbot Arena引入风格控制之后，我们现在也在Arena Hard Auto中推出了风格控制！我们采用了与[博客文章](https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-08-28-style-control\u002F)中相同的风格控制方法。有关方法论和技术背景，请参阅该博客文章。\n\n在应用风格控制之前，请确保您的模型回答已生成适当的风格属性。您可以从[Hugging Face仓库](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flmarena-ai\u002Farena-hard-auto)拉取最新数据，或者运行以下脚本！\n\n要为您的模型回答添加风格属性，请使用`add_markdown_info.py`。以下命令会从`--dir`目录读取模型回答，附加风格属性（token长度、标题数量等），并将新回答保存到`--output-dir`目录。\n```console\n> python add_markdown_info.py --dir data\u002Farena-hard-v0.1\u002Fmodel_answer --output-dir data\u002Farena-hard-v0.1\u002Fmodel_answer\n```\n\n要控制风格（token长度和Markdown元素），请在运行`show_result.py`时使用`--control-features`或`-f`选项。\n\n```console\n> python show_result.py -f markdown length # 风格控制\n> python show_result.py -f markdown # 仅控制Markdown密度\n> python show_result.py -f length # 仅控制长度\n```\n\n## 评估基准测试\n我们概述了旨在近似人类偏好的基准测试应具备的两项关键属性，以提供模型之间的有意义比较：\n1. 可区分性：基准测试应能高置信度地区分不同模型。\n2. 
与人类偏好的一致性：基准测试应与人类偏好保持一致。\n\n尽管以往的研究主要关注一致性，但在比较质量相近的模型时（例如来自同一训练运行的不同检查点），可区分性同样至关重要。然而，由于提示设计的局限性以及大语言模型评估中固有的变异性，实现高置信度的可区分性颇具挑战。过于简单的提示无法有效区分不同模型，而人类和大语言模型判断中的随机性则会导致预测结果不一致。因此，通常很难确定模型的表面表现是否反映了其能力的真实差异，还是仅仅是噪声观测的结果，这凸显了需要开发方法来验证基准测试能否可靠地区分相似模型。\n\n在 AlpacaEval（Li 等，2023）等基准测试中常用的统计指标，如皮尔逊相关系数（Pearson, 1895）和斯皮尔曼相关系数（Spearman, 1961），用于衡量与人类偏好排序的相关性，但可能无法充分解决模型可区分性和排序稳定性问题。此外，这些指标仅提供排序相关性的粗略信号，而无法量化模型对之间性能差异的大小。为解决这些问题，我们开发了三项新指标：**置信度下的可区分性**、**置信度下的一致性**和**成对排名布里尔评分**。\n\n**置信度下的可区分性**通过自助法计算基准测试在不同随机种子下预测模型对胜负的一致性，从而量化基准测试的置信度。具体而言，该指标计算基准测试得分的置信区间互不重叠的模型对所占的比例。比例越高，表明基准测试越有信心区分不同模型的性能，因为它们的得分置信区间没有重叠。\n\n关于**置信度下的一致性**和**成对排名布里尔评分**，请参阅我们的[论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.11939)第3节。计算这些指标的代码可在本[Colab 笔记本](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1ar6XLWREN_dXEh404WNOxroFVUe_4njp)中找到。\n\n## 与 Amazon Bedrock API 的集成\n我们现在已将 arena-hard 扩展至支持在 **Amazon Bedrock** 上托管的大语言模型的基准测试。具体来说，在 `utils\u002Fcompletion.py` 中添加了 Amazon Bedrock 调用 API，使您能够使用 Arena-Hard 测试 Amazon Bedrock 上托管的各种模型。\n\n目前我们支持以下模型：\n1. Anthropic 模型：Claude 3 Haiku、Claude 3 Sonnet、Claude 3.5 Sonnet、Claude 3 Opus、Claude 3.5 Sonnet v2、Claude 3.7 Sonnet\n2. Mistral 模型：Mistral 7B Instruct、Mistral 8x7B Instruct、Mistral Large v1、Mistral Large v2、Mistral Small、Pixtral Large\n3. Meta Llama 模型：LLaMA 3 8B Instruct、LLaMA 3 70B Instruct、LLaMA 3.1 8B Instruct、LLaMA 3.1 70B Instruct、LLaMA 3.1 405B Instruct、LLaMA 3.2 1B Instruct、LLaMA 3.2 3B Instruct、LLaMA 3.2 11B Instruct、LLaMA 3.2 90B Instruct、LLaMA 2 Chat 13B、LLaMA 2 Chat 70B\n4. Amazon Nova 模型：Amazon Nova Lite、Amazon Nova Pro、Amazon Nova Micro、Amazon Nova Premier\n5. DeepSeek-R1\n\n要**添加一个托管在 Amazon Bedrock 上的新模型**，您需要更新两个文件：`config\u002Fapi_config.yaml` 和 `utils\u002Fcompletion.py`。\n\n### 1. 更新 `config\u002Fapi_config.yaml`\n\n为该模型定义一个新的条目，填写正确的 `model_id`、`api_type` 和生成参数。\n\n**示例：**\n\n```yaml\naws_nova_light_v1:\n  model: aws_nova_light_v1\n  model_id: us.amazon.nova-lite-v1:0\n  endpoints: null\n  api_type: aws_nova\n  parallel: 8\n  max_tokens: 4096\n  temperature: 0.0\n```\n\n**关键字段：**\n1. model：用于引用此配置的内部别名。\n2. model_id：Bedrock 特定的模型标识符。\n3. api_type：`api_type` 应通过 `utils\u002Fcompletion.py` 注册。\n4. endpoints：对于默认的 Bedrock 端点设置为 null，或使用自定义端点覆盖。\n5. parallel：控制并行推理调用次数（根据吞吐量调整）。\n6. max_tokens：最大输出标记数。\n7. temperature：控制生成的随机性（0.0 表示确定性）。\n\n更多示例请参见 `config\u002Fapi_config_bedrock_models.yaml`。有关模型 ID 和功能，请参考 Amazon Bedrock 文档（https:\u002F\u002Fdocs.aws.amazon.com\u002Fbedrock\u002Flatest\u002Fuserguide\u002Fmodels-supported.html）。\n\n### 2. 在 `utils\u002Fcompletion.py` 中注册模型处理器\n创建一个带有 `@register_api(\"\u003Capi_type>\")` 装饰器的新函数，定义如何格式化输入、使用 boto3 将请求发送到 Bedrock，以及如何解析响应。\n\n您可以使用现有示例作为模板：\n\n    > @register_api(\"aws_llama\") 处理 LLaMA 模型\n    > @register_api(\"aws_nova\") 处理 Nova 模型\n\n这些函数通常会使用诸如 `create_llama3_body()` 或 `create_nova_messages()` 等辅助函数，并通过 Bedrock 的 `invoke_model` API 发送请求。\n\n**请注意：**\n\n    > `api_config.yaml` 中的 `api_type` 必须与 `@register_api(...)` 装饰器中使用的名称一致。\n    > 输入格式化（例如提示结构、消息列表）\n    > 参数映射（温度、max_tokens、model_id）\n    > 响应解析（例如生成内容与嵌套输出.message.content）\n\n遵循上述两步流程，用户可以轻松扩展对任何符合兼容调用结构的 Bedrock 托管模型的支持。有关示例，请参阅仓库中现有的 Claude、LLaMA 和 Amazon Nova 处理程序。\n\n## 社区贡献\n欢迎提交 PR 或打开议题！\n\n如果您希望将自己的模型加入排行榜，请将以下信息发送给我：\n1. 您模型的 OpenAI 兼容端点。\n2. 
用于我进行推理判断的 OpenAI API 密钥。\n\n由此带来的不便敬请谅解！由于 Arena-Hard-Auto 是开放数据，我们希望避免有人在排行榜上作弊。若发现任何可疑行为，我们保留不将您的模型加入排行榜的权利。\n\n## 引用\n本仓库中的代码基于以下论文开发而成。如果您觉得本仓库有所帮助，请引用：\n```\n@article{li2024crowdsourced,\n  title={从众包数据到高质量基准：Arena-Hard 和 BenchBuilder 流水线},\n  author={李天乐、蒋伟霖、弗里克·埃文、邓拉普·丽莎、吴天浩、朱邦华、冈萨雷斯·约瑟夫·E、斯托伊卡·伊昂},\n  journal={arXiv 预印本 arXiv:2406.11939},\n  year={2024}\n}\n@misc{arenahard2024,\n    title = {从实时数据到高质量基准：Arena-Hard 流水线},\n    url = {https:\u002F\u002Flmsys.org\u002Fblog\u002F2024-04-19-arena-hard\u002F},\n    author = {李天乐*、蒋伟霖*、弗里克·埃文、邓拉普·丽莎、朱邦华、约瑟夫·E·冈萨雷斯、伊昂·斯托伊卡},\n    month = {四月},\n    year = {2024}\n}\n```","# Arena-Hard-Auto 快速上手指南\n\nArena-Hard-Auto 是一个用于评估指令微调大语言模型（LLM）的自动化工具。它在开放式 LLM 基准测试中与 LMArena (Chatbot Arena) 具有最高的相关性和区分度，适合在部署前快速验证模型在真实世界高难度问题上的表现。\n\n## 环境准备\n\n*   **系统要求**：Linux 或 macOS 环境，推荐具备 GPU 以加速本地推理（若使用 API 则非必须）。\n*   **前置依赖**：\n    *   Python 3.8+\n    *   `git` 和 `git-lfs` (用于下载数据集)\n    *   pip 包管理器\n*   **API 密钥**：若使用云端模型作为裁判（Judge）或被测模型，需准备相应的 API Key（如 OpenAI, Anthropic, Google Vertex AI 等）。\n\n## 安装步骤\n\n1.  **克隆仓库**\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto.git\n    cd arena-hard-auto\n    ```\n\n2.  **安装依赖**\n    建议配置国内 pip 镜像源（如清华源）以加速安装：\n    ```bash\n    pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    pip install -r requirements-optional.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    *注：`requirements-optional.txt` 包含 Anthropic SDK 等可选依赖，按需安装。*\n\n3.  **下载数据集**\n    需要安装 `git-lfs` 并拉取预生成的模型回答和评判数据：\n    ```bash\n    git lfs install\n    git clone git@hf.co:datasets\u002Flmarena-ai\u002Farena-hard-auto arena-hard-data\n    cp -r arena-hard-data\u002Fdata .\n    ```\n    *若无法访问 HuggingFace，可尝试通过镜像站下载或使用代理。*\n\n## 基本使用\n\n以下流程展示如何评估一个自定义模型（假设该模型已通过兼容 OpenAI 格式的 API 提供服务，例如使用 vLLM 或 SGLang 部署）。\n\n### 第一步：配置 API 端点\n\n编辑 `config\u002Fapi_config.yaml`，填入你的模型服务地址和 API Key。支持 OpenAI 兼容接口、Anthropic 等。\n\n```yaml\n# config\u002Fapi_config.yaml 示例\nopenai_api_key: \"sk-your-key\"\nopenai_api_base: \"http:\u002F\u002Flocalhost:8000\u002Fv1\" # 替换为你的本地或远程服务地址\n# 其他配置参考文件内注释\n```\n\n### 第二步：生成模型回答\n\n编辑 `config\u002Fgen_answer_config.yaml`，在 `model_list` 中添加你的模型名称。\n\n```yaml\n# config\u002Fgen_answer_config.yaml\nmodel_list:\n  - deepseek-r1\n  - your-model-name  # 替换为你的模型名称\n```\n\n运行生成命令：\n```bash\npython gen_answer.py\n```\n*工具会自动跳过已存在的缓存结果。*\n\n### 第三步：生成评判结果 (Judgments)\n\n编辑评测配置文件（如 `config\u002Farena-hard-v2.0.yaml`），确保 `model_list` 中包含待测模型。\n\n```yaml\n# config\u002Farena-hard-v2.0.yaml\nmodel_list:\n  - deepseek-r1\n  - your-model-name\n```\n\n*默认使用 GPT-4.1 作为裁判。若需使用 Gemini-2.5，请在配置文件中取消相应注释。*\n\n运行评判命令：\n```bash\npython gen_judgment.py\n```\n\n### 第四步：查看结果\n\n运行脚本查看最终得分和置信区间：\n```bash\npython show_result.py\n```\n\n输出示例：\n```text\n                                      Model  Scores (%)         CI (%)\n0                             o3-2025-04-16        87.6  (-0.8 \u002F +1.0)\n1                   your-model-name        XX.X  (-X.X \u002F +X.X)\n```","某 AI 初创团队在发布自研大模型前，急需验证其在真实复杂场景下的表现，以预测用户在 Chatbot Arena 中的偏好排名。\n\n### 没有 arena-hard-auto 时\n- **评估成本高昂**：依赖人工标注或众包平台对数百个高难度提示词进行打分，耗时数周且费用昂贵，严重拖慢迭代节奏。\n- **反馈周期滞后**：模型微调后需等待漫长的人工评估结果才能知晓效果，导致开发团队无法快速试错和优化。\n- **相关性存疑**：使用的静态基准测试（如 MMLU）分数虽高，但与真实用户投票产生的 LMArena 排行榜相关性低，出现“刷榜”却不受用户欢迎的尴尬局面。\n- **缺乏风格控制**：难以量化评估模型在特定写作风格或指令遵循上的细微差别，只能凭主观感觉调整。\n\n### 使用 arena-hard-auto 后\n- **自动化高效评测**：利用 GPT-4.1 和 Gemini-2.5 作为自动裁判，几分钟内即可完成对 500+ 个高难度真实用户查询的评估，成本降低九成以上。\n- 
**即时迭代闭环**：每次代码提交后自动运行评测，团队能立即看到模型在“硬提示词”集上的得分变化，实现天级别的快速迭代。\n- **精准预测排名**：凭借与 LMArena 人类偏好高度相关的特性，团队能在模型上线前准确预判其市场竞争力，避免无效部署。\n- **细粒度风格调优**：借助新增的风格控制功能，针对性地优化模型在创意写作或工程代码领域的表现，显著提升用户满意度。\n\narena-hard-auto 将原本昂贵滞后的人类偏好评估转化为低成本、高相关性的自动化流程，成为大模型上线前不可或缺的“实战演习场”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flmarena_arena-hard-auto_fefbbbec.png","lmarena","Arena","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flmarena_d4840afd.jpg","Where AI meets the real world. Formerly LMArena. We measure and advance the frontier of AI through community-driven evaluation.",null,"arena","https:\u002F\u002Farena.ai\u002Fblog\u002F","https:\u002F\u002Fgithub.com\u002Flmarena",[23,27],{"name":24,"color":25,"percentage":26},"Python","#3572A5",99.8,{"name":28,"color":29,"percentage":30},"Shell","#89e051",0.2,1015,150,"2026-04-16T12:47:15","Apache-2.0",2,"未说明","非必需。若使用本地模型推理（如 SGLang），需根据模型大小配置相应 GPU；若仅调用 API（OpenAI\u002FAnthropic\u002FVertex AI）则无需本地 GPU。",{"notes":39,"python":36,"dependencies":40},"该工具主要作为自动评估框架，默认通过 API 调用外部模型（如 GPT-4.1, Gemini-2.5）作为裁判或被测模型，因此对本地硬件无硬性要求。若需本地部署被测模型或裁判模型，需自行配置对应的推理引擎（如 vLLM 或 SGLang）及相应显存。运行前需安装 git-lfs 以下载预生成的答案和评判数据。",[41,42,43,44,45],"requirements.txt 中定义的依赖","requirements-optional.txt (可选，如 anthropic sdk)","vLLM (可选，用于托管本地模型)","SGLang (可选，用于内置快速推理)","git-lfs (用于下载数据集)",[47,48],"语言模型","其他","ready","2026-03-27T02:49:30.150509","2026-04-19T15:40:01.656643",[53,58,63,68,73,78],{"id":54,"question_zh":55,"answer_zh":56,"source_url":57},42577,"运行 show_result.py 时出现逻辑回归不收敛的警告（STOP: TOTAL NO. of ITERATIONS REACHED LIMIT），这会影响结果吗？","通常可以忽略此警告。出现该问题的原因可能是当某些模型样本不足时，逻辑回归问题病态（ill-conditioned），导致求解器需要更多迭代次数才能收敛。维护者指出，只要基准模型固定，胜率或 Elo 评级通常不会受批次中模型数量的影响。如果必须解决，可以尝试增加迭代次数、对数据进行缩放，或在优化初始化时在每对模型之间引入微小的加权“平局”虚拟比赛以确保完全连接的胜率图。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F27",{"id":59,"question_zh":60,"answer_zh":61,"source_url":62},42578,"如何将本地部署的模型配置为裁判模型（Judge Model）？","如果在生成回答时本地模型工作正常，但设置为裁判时报错（如 NotFoundError），通常需要修改代码。具体解决方案是在 `gen_judgment.py` 文件中（约第 100 行），将变量 `model` 替换为 `endpoint_info[\"model_name\"]`。这样可以让程序正确读取本地端点配置的模型名称，而不是尝试从远程 API 查找。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F20",{"id":64,"question_zh":65,"answer_zh":66,"source_url":67},42579,"是否有开源模型可以替代 GPT-4 作为裁判模型以降低成本？","是的，OpenCompass 团队提供了一系列微调后的裁判模型（CompassJudger），包括 32B、14B、7B 和 1.5B 版本。实验表明，CJ-1-14B 和 CJ-1-32B 在 ArenaHard 数据集上与 GPT-4o-0806 的裁判相关性超过 95%，可作为低成本的替代方案。这些模型托管在 Hugging Face 上（例如 opencompass\u002FCompassJudger-1-32B-Instruct）。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F49",{"id":69,"question_zh":70,"answer_zh":71,"source_url":72},42580,"切换不同版本的 GPT 裁判模型（如从 gpt-4-1106 切换到 gpt-4-0125）会导致评分出现巨大差异吗？","是的，不同版本的裁判模型可能会导致评分出现显著差异（例如有用户观察到超过 440 分的差距）。这是因为不同版本的模型在判断偏好上存在偏差。官方正在致力于减少 LLM 裁判的偏差，并计划在未来版本（如 Arena-Hard-v0.2）中引入新的裁判模型来解决一致性问题。建议在比较结果时尽量保持裁判模型版本一致。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F16",{"id":74,"question_zh":75,"answer_zh":76,"source_url":77},42581,"是否支持设置除 temperature 以外的其他生成采样参数（如 repetition_penalty）？","目前原生配置主要支持 `temperature`。对于其他参数，社区建议模型创作者应在 `generation_config.json` 中设定良好的默认值。如果模型未指定解码超参数，用户可以参考 OpenCompass 等集成方案，它们支持指定贪婪解码或采样参数，并可使用 VLLM 或 LMdeploy 等加速器来加速推理。此外，设置 `repetition_penalty` 通常被认为是有帮助的。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F9",{"id":79,"question_zh":80,"answer_zh":81,"source_url":82},42582,"在评估 Qwen3 等具有“思考模式”的模型时，得分与官方报告不符怎么办？","得分差异通常源于是否启用了模型的“思考模式”（think 
mode）。官方报告的分数可能基于特定的推理模式。在使用 vllm serve 部署时，需确保使用正确的 chat-template（如 `qwen3_nonthinking.jinja` 或对应的 thinking 模板）以及匹配的参数配置。如果关闭思考模式，应确保温度（temperature）设置为 0.0 并使用对应的非思考模板，以复现官方基准中的特定设置。","https:\u002F\u002Fgithub.com\u002Flmarena\u002Farena-hard-auto\u002Fissues\u002F69",[],[85,95,105,113,121,133],{"id":86,"name":87,"github_repo":88,"description_zh":89,"stars":90,"difficulty_score":35,"last_commit_at":91,"category_tags":92,"status":49},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,"2026-04-18T23:33:24",[93,94,47],"开发框架","Agent",{"id":96,"name":97,"github_repo":98,"description_zh":99,"stars":100,"difficulty_score":101,"last_commit_at":102,"category_tags":103,"status":49},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[47,104,94,93],"图像",{"id":106,"name":107,"github_repo":108,"description_zh":109,"stars":110,"difficulty_score":35,"last_commit_at":111,"category_tags":112,"status":49},8553,"spec-kit","github\u002Fspec-kit","Spec Kit 是一款专为提升软件开发效率而设计的开源工具包，旨在帮助团队快速落地“规格驱动开发”（Spec-Driven Development）模式。传统开发中，需求文档往往与代码实现脱节，导致沟通成本高且结果不可控；而 Spec Kit 通过将规格说明书转化为可执行的指令，让 AI 直接依据明确的业务场景生成高质量代码，从而减少从零开始的随意编码，确保产出结果的可预测性。\n\n该工具特别适合希望利用 AI 辅助编程的开发者、技术负责人及初创团队。无论是启动全新项目还是在现有工程中引入规范化流程，用户只需通过简单的命令行操作，即可初始化项目并集成主流的 AI 编程助手。其核心技术亮点在于“规格即代码”的理念，支持社区扩展与预设模板，允许用户根据特定技术栈定制开发流程。此外，Spec Kit 强调官方维护的安全性，提供稳定的版本管理，帮助开发者在享受 AI 红利的同时，依然牢牢掌握架构设计的主动权，真正实现从“凭感觉写代码”到“按规格建系统”的转变。",88749,"2026-04-17T09:48:14",[47,104,94,93],{"id":114,"name":115,"github_repo":116,"description_zh":117,"stars":118,"difficulty_score":35,"last_commit_at":119,"category_tags":120,"status":49},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[93,47],{"id":122,"name":123,"github_repo":124,"description_zh":125,"stars":126,"difficulty_score":35,"last_commit_at":127,"category_tags":128,"status":49},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85267,"2026-04-18T11:00:28",[104,129,130,131,94,48,47,93,132],"数据工具","视频","插件","音频",{"id":134,"name":135,"github_repo":136,"description_zh":137,"stars":138,"difficulty_score":139,"last_commit_at":140,"category_tags":141,"status":49},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[47,129,48]]