[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Vchitect--VBench":3,"tool-Vchitect--VBench":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,2,"2026-04-08T11:23:26",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[43,15,13,14],"语言模型",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth 
Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,52],"视频",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":59,"last_commit_at":60,"category_tags":61,"status":17},5646,"opencv","opencv\u002Fopencv","OpenCV 是一个功能强大的开源计算机视觉库，被誉为机器视觉领域的“瑞士军刀”。它主要解决让计算机“看懂”图像和视频的核心难题，提供了从基础的图像读取、色彩转换、边缘检测，到复杂的人脸识别、物体追踪、3D 重建及深度学习模型部署等全方位算法支持。无论是处理静态图片还是分析实时视频流，OpenCV 都能高效完成特征提取与模式识别任务。\n\n这款工具特别适合计算机视觉开发者、人工智能研究人员以及机器人工程师使用。对于希望将视觉感知能力集成到应用中的软件工程师，或是需要快速验证算法原型的学术研究者，OpenCV 都是不可或缺的基础设施。虽然普通用户通常不会直接操作代码，但日常生活中使用的扫码支付、美颜相机和自动驾驶系统，背后往往都有它的身影。\n\nOpenCV 的独特亮点在于其卓越的性能与广泛的兼容性。它采用 C++ 编写以确保高速运算，同时提供 Python、Java 等多种语言接口，极大降低了开发门槛。库中内置了数千种优化算法，并支持跨平台运行，能够无缝对接各类硬件加速器。作为社区驱动的项目，OpenCV 拥有活跃的生态系统和丰富的学习资源，持续推动着视觉技术的前沿发展。",86988,1,"2026-04-08T16:06:22",[14,15],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":10,"env_os":92,"env_gpu":92,"env_ram":92,"env_deps":93,"category_tags":97,"github_topics":99,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":108,"updated_at":109,"faqs":110,"releases":145},6037,"Vchitect\u002FVBench","VBench","[CVPR2024 Highlight] VBench - We Evaluate Video Generation","VBench 是一套专为视频生成模型打造的综合评估基准工具，旨在解决当前 AI 视频领域缺乏统一、客观评价标准的难题。随着各类视频生成模型层出不穷，如何科学地衡量它们在画面质量、动作流畅度、语义一致性等多维度的表现成为行业痛点。VBench 通过构建涵盖多个关键维度的评估体系，结合自动化指标与人类偏好对齐机制，为不同模型提供公平、全面的“考试成绩单”。\n\n该项目不仅收录了丰富的测试提示词套件和采样视频数据，还提供了从安装到运行的一站式代码实现，支持研究人员快速复现对比实验。其独特亮点在于不断迭代升级（如 
VBench-2.0），引入了更贴近真实应用场景的评估维度，并开放了在线排行榜和互动竞技场，让用户能直观查看各模型生成的实际视频效果。\n\nVBench 特别适合人工智能研究人员、算法开发者以及关注视频生成技术进展的产品团队使用。无论是想要验证新模型的性能，还是希望了解行业最新水平，VBench 都能提供可靠的数据支撑和分析视角。作为一个开源项目，它正逐步成为视频生成领域不可或缺的评估基础设施，推动整个社区向更高标准迈进。","![vbench_logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_9c7b090edd87.jpg)\n\n\n\u003C!-- [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2311.99999-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.99999) -->\n[![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n[![VBench Arena (View Generated Videos Here!)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Video_Arena)\n[![VBench-2.0 Arena (View Generated Videos Here!)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench2.0%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench2.0_Video_Arena)\n[![Project Page](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-Website-green?logo=googlechrome&logoColor=green)](https:\u002F\u002Fvchitect.github.io\u002FVBench-project\u002F)\n[![Project Page](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench%202.0-Website-green?logo=googlechrome&logoColor=green)](https:\u002F\u002Fvchitect.github.io\u002FVBench-2.0-project\u002F)\n[![Dataset 
Download](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDataset-Download-red?logo=googlechrome&logoColor=red)](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1on66fnZ8atRoLDimcAXMxSwRxqN8_0yS?usp=sharing)\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fvbench)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fvbench\u002F)\n[![Video](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-Video-c4302b?logo=youtube&logoColor=red)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7IhCC8Qqn8Y)\n[![Video](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench%202.0-Video-c4302b?logo=youtube&logoColor=red)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kJrzKy9tgAc)\n![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_9f71ff971f14.png)\n\n\nThis repository provides unified implementations for the **VBench series** of works, supporting comprehensive evaluation of video generative models across a wide spectrum of capabilities and settings.\n\nIf your questions are not addressed in this README, please contact Ziqi Huang at ZIQI002 [at] e [dot] ntu [dot] edu [dot] sg.\n\n## Table of Contents\n- [Overview](#overview) - *See this section for component locations and the differences between VBench, VBench++, and VBench-2.0.*\n- [Updates](#updates)\n- [Evaluation Results](#evaluation_results)\n- [Video Generation Models Info](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models)\n- [Installation](#installation)\n- [Usage](#usage)\n- [Prompt Suite](#prompt_suite)\n- [Sampled Videos](#sampled_videos)\n- [Evaluation Method Suite](#evaluation_method_suite)\n- [Citation and Acknowledgement](#citation_and_acknowledgement)\n\n\n\u003Ca name=\"overview\">\u003C\u002Fa>\n## :mega: Overview\n\nThis repository provides unified implementations for the **VBench series** of works, supporting comprehensive evaluation of video 
generative models across a wide spectrum of capabilities and settings.\n\n### (1) VBench\n\n***TL;DR: Evaluating Video Generation — Benchmark • Evaluation Dimensions • Evaluation Methods • Human Alignment • Insights***\n\n> [![VBench Paper (CVPR 2024)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-CVPR%202024-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17982) **VBench: Comprehensive Benchmark Suite for Video Generative Models**  \u003Cbr>\n> [Ziqi Huang](https:\u002F\u002Fziqihuangg.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [Yinan He](https:\u002F\u002Fgithub.com\u002Fyinanhe)\u003Csup>∗\u003C\u002Fsup>, [Jiashuo Yu](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=iH0Aq0YAAAAJ&hl=zh-CN)\u003Csup>∗\u003C\u002Fsup>, [Fan Zhang](https:\u002F\u002Fgithub.com\u002Fzhangfan-p)\u003Csup>∗\u003C\u002Fsup>, [Chenyang Si](https:\u002F\u002Fchenyangsi.top\u002F), [Yuming Jiang](https:\u002F\u002Fyumingj.github.io\u002F), [Yuanhan Zhang](https:\u002F\u002Fzhangyuanhan-ai.github.io\u002F),  [Tianxing Wu](https:\u002F\u002Ftianxingwu.github.io\u002F), [Qingyang Jin](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench), [Nattapol Chanpaisit](https:\u002F\u002Fnattapolchan.github.io\u002Fme), [Yaohui Wang](https:\u002F\u002Fwyhsirius.github.io\u002F), [Xinyuan Chen](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=3fWSC8YAAAAJ), [Limin Wang](https:\u002F\u002Fwanglimin.github.io), [Dahua Lin](http:\u002F\u002Fdahua.site\u002F)\u003Csup>+\u003C\u002Fsup>, [Yu Qiao](http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html)\u003Csup>+\u003C\u002Fsup>, [Ziwei Liu](https:\u002F\u002Fliuziwei7.github.io\u002F)\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n> IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_61f2997b202e.jpg)\n\n\nWe propose **VBench**, a comprehensive benchmark suite for 
video generative models. We design a comprehensive and hierarchical \u003Cb>Evaluation Dimension Suite\u003C\u002Fb> to decompose \"video generation quality\" into multiple well-defined dimensions to facilitate fine-grained and objective evaluation. For each dimension and each content category, we carefully design a \u003Cb>Prompt Suite\u003C\u002Fb> as test cases, and sample \u003Cb>Generated Videos\u003C\u002Fb> from a set of video generation models. For each evaluation dimension, we specifically design an \u003Cb>Evaluation Method Suite\u003C\u002Fb>, which uses a carefully crafted method or designated pipeline for automatic objective evaluation. We also conduct \u003Cb>Human Preference Annotation\u003C\u002Fb> for the generated videos for each dimension, and show that VBench evaluation results are \u003Cb>well aligned with human perceptions\u003C\u002Fb>. VBench can provide valuable insights from multiple perspectives. \n\n\n**Note**: The code and README for the VBench components are located [here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster), relative path: `.`.\n\n```bibtex\n@InProceedings{huang2023vbench,\n    title={{VBench}: Comprehensive Benchmark Suite for Video Generative Models},\n    author={Huang, Ziqi and He, Yinan and Yu, Jiashuo and Zhang, Fan and Si, Chenyang and Jiang, Yuming and Zhang, Yuanhan and Wu, Tianxing and Jin, Qingyang and Chanpaisit, Nattapol and Wang, Yaohui and Chen, Xinyuan and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n    booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n    year={2024}\n}\n```\n\n### (2) VBench++\n\n***TL;DR: Extends VBench with (1) VBench-I2V for image-to-video, (2) VBench-Long for long videos, and (3) VBench-Trustworthiness covering fairness, bias, and safety.***\n\n> [![VBench++ (TPAMI 
2025)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-TPAMI%202025-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503)  **VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models** \u003Cbr>\n> [Ziqi Huang](https:\u002F\u002Fziqihuangg.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [Fan Zhang](https:\u002F\u002Fgithub.com\u002Fzhangfan-p)\u003Csup>∗\u003C\u002Fsup>, [Xiaojie Xu](https:\u002F\u002Fgithub.com\u002Fxjxu21), [Yinan He](https:\u002F\u002Fgithub.com\u002Fyinanhe), [Jiashuo Yu](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=iH0Aq0YAAAAJ&hl=zh-CN), [Ziyue Dong](https:\u002F\u002Fgithub.com\u002FDZY-irene), [Qianli Ma](https:\u002F\u002Fgithub.com\u002FMqLeet), [Nattapol Chanpaisit](https:\u002F\u002Fnattapolchan.github.io\u002Fme), [Chenyang Si](https:\u002F\u002Fchenyangsi.top\u002F), [Yuming Jiang](https:\u002F\u002Fyumingj.github.io\u002F), [Yaohui Wang](https:\u002F\u002Fwyhsirius.github.io\u002F), [Xinyuan Chen](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=3fWSC8YAAAAJ), [Ying-Cong Chen](https:\u002F\u002Fwww.yingcong.me\u002F), [Limin Wang](https:\u002F\u002Fwanglimin.github.io), [Dahua Lin](http:\u002F\u002Fdahua.site\u002F)\u003Csup>+\u003C\u002Fsup>, [Yu Qiao](http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html)\u003Csup>+\u003C\u002Fsup>, [Ziwei Liu](https:\u002F\u002Fliuziwei7.github.io\u002F)\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n> IEEE Transactions on Pattern Analysis and Machine Intelligence (**TPAMI**), 2025\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_cfa21d802611.jpg)\n\n\n\u003Cb>VBench++\u003C\u002Fb> supports a wide range of video generation tasks, including text-to-video and image-to-video, with an adaptive Image Suite for fair evaluation across different settings. 
It evaluates not only technical quality but also the trustworthiness of generative models, offering a comprehensive view of model performance. We continually incorporate more video generative models into VBench to inform the community about the evolving landscape of video generation.\n\n\n**Note**: The code and README for the VBench++ components are located at:\n- (1) VBench-I2V (image-to-video): [link](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v), relative path: `vbench2_beta_i2v`\n- (2) VBench-Long (long video evaluation): [link](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_long), relative path: `vbench2_beta_long`\n- (3) VBench-Trustworthiness (fairness, bias, and safety): [link](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_trustworthiness), relative path: `vbench2_beta_trustworthiness`\n\n*These modules belong to VBench++, not VBench or VBench-2.0. 
However, to maintain backward compatibility for users who have already installed the repository, we preserve the original relative path names and provide this clarification here.*\n\n```bibtex\n@article{huang2025vbench++,\n    title={{VBench++}: Comprehensive and Versatile Benchmark Suite for Video Generative Models},\n    author={Huang, Ziqi and Zhang, Fan and Xu, Xiaojie and He, Yinan and Yu, Jiashuo and Dong, Ziyue and Ma, Qianli and Chanpaisit, Nattapol and Si, Chenyang and Jiang, Yuming and Wang, Yaohui and Chen, Xinyuan and Chen, Ying-Cong and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, \n    year={2025},\n    doi={10.1109\u002FTPAMI.2025.3633890}\n}\n```\n\n### (3) VBench-2.0\n\n***TL;DR: Extends VBench to evaluate intrinsic faithfulness — a key challenge for next-generation video generation models.***\n\n> [![VBench-2.0 Report (arXiv)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-2.0%20Report-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21755) **VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness**\u003Cbr>\n> [Dian Zheng](https:\u002F\u002Fzhengdian1.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [Ziqi Huang](https:\u002F\u002Fziqihuangg.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [Hongbo Liu](https:\u002F\u002Fgithub.com\u002FAlexios-hub), [Kai Zou](https:\u002F\u002Fgithub.com\u002FJacky-hate), [Yinan He](https:\u002F\u002Fgithub.com\u002Fyinanhe), [Fan Zhang](https:\u002F\u002Fgithub.com\u002Fzhangfan-p), [Yuanhan Zhang](https:\u002F\u002Fzhangyuanhan-ai.github.io\u002F),  [Jingwen He](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=GUxrycUAAAAJ&hl=zh-CN), [Wei-Shi Zheng](https:\u002F\u002Fwww.isee-ai.cn\u002F~zhwshi\u002F)\u003Csup>+\u003C\u002Fsup>, [Yu Qiao](http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html)\u003Csup>+\u003C\u002Fsup>, [Ziwei 
Liu](https:\u002F\u002Fliuziwei7.github.io\u002F)\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_f289e847ae6d.jpg)\nOverview of VBench-2.0. (a) Scope of VBench-2.0. Video generative models have progressed from achieving superficial faithfulness in fundamental technical aspects such as pixel fidelity and basic prompt adherence, to addressing more complex challenges associated with intrinsic faithfulness, including commonsense reasoning, physics-based realism, human motion, and creative composition. While VBench primarily assessed early-stage technical quality, VBench-2.0 expands the benchmarking framework to evaluate these advanced capabilities, ensuring a more comprehensive assessment of next-generation models. (b) Evaluation Dimension of VBench-2.0. VBench-2.0 introduces a structured evaluation suite comprising five broad categories and 18 fine-grained capability dimensions.\n\n**Note**: The code and README for the VBench-2.0 components are located at [link](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0), relative path: `VBench-2.0`.\n\n```bibtex\n@article{zheng2025vbench2,\n    title={{VBench-2.0}: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness},\n    author={Zheng, Dian and Huang, Ziqi and Liu, Hongbo and Zou, Kai and He, Yinan and Zhang, Fan and Zhang, Yuanhan and He, Jingwen and Zheng, Wei-Shi and Qiao, Yu and Liu, Ziwei},\n    journal={arXiv preprint arXiv:2503.21755},\n    year={2025}\n}\n```\n\n\n\n\u003Ca name=\"updates\">\u003C\u002Fa>\n## :fire: Updates\n- [03\u002F2026] **VBench-I2V Arena** released: [![VBench-I2V Arena (View Generated Videos Here!)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench--I2V%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBenchI2V_Video_Arena) View the generated videos here, and vote for your preferred video. 
You can explore videos generated by your chosen models following your chosen text prompts.\n- [11\u002F2025] **VBench++** accepted to TPAMI: [![VBench++ (TPAMI 2025)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-TPAMI%202025-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503)\n- [05\u002F2025] We support **evaluating customized videos** for VBench-2.0! See [here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0#new-evaluating-single-dimension-of-your-own-videos) for instructions.\n- [04\u002F2025] **[Human Anomaly Detection for AIGC Videos](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0\u002Fvbench2\u002Fthird_party\u002FViTDetector):** We release the pipeline for evaluating human anatomical quality in AIGC videos, including a manually annotated human anomaly dataset on real and AIGC videos, and the training pipeline for anomaly detection.\n- [03\u002F2025] :fire: **Major Update! We released [VBench-2.0](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0)!** :fire: Video generative models have progressed from achieving *superficial faithfulness* in fundamental technical aspects such as pixel fidelity and basic prompt adherence, to addressing more complex challenges associated with *intrinsic faithfulness*, including commonsense reasoning, physics-based realism, human motion, and creative composition. 
While VBench primarily assessed early-stage technical quality, VBench-2.0 expands the benchmarking framework to evaluate these advanced capabilities, ensuring a more comprehensive assessment of next-generation models.\n- [01\u002F2025] **PyPI Updates: v0.1.5** preprocessing bug fixes, torch>=2.0 support.\n- [01\u002F2025] **VBench Arena** released: [![Arena (View Generated Videos Here!)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Video_Arena) View the generated videos here, and vote for your preferred video. This demo features over 180,000 generated videos, and you can explore videos generated by your chosen models (we already support 40 models) following your chosen text prompts.\n\u003C!-- - [11\u002F2024] **VBench++** released: [![VBench++ Report (arXiv)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-arXiv%20Report-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503) -->\n- [09\u002F2024] **VBench-Long Leaderboard** available: Our VBench-Long leaderboard now has 10 long video generation models. VBench leaderboard now has 40 text-to-video (both long and short) models. All video generative models are encouraged to participate! [![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n- [09\u002F2024] **PyPI Updates: PyPI package is updated to version [0.1.4](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Freleases\u002Ftag\u002Fv0.1.4):** bug fixes and multi-gpu inference.\n- [08\u002F2024] **Longer and More Descriptive Prompts**: [Available Here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts\u002Fgpt_enhanced_prompts)! 
We follow [CogVideoX](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo?tab=readme-ov-file#prompt-optimization)'s prompt optimization technique to enhance VBench prompts using GPT-4o, making them longer and more descriptive without altering their original meaning.\n- [08\u002F2024] **VBench Leaderboard** update: Our leaderboard has 28 *T2V models*, 12 *I2V models* so far. All video generative models are encouraged to participate! [![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n- [06\u002F2024] :fire: **[VBench-Long](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_long)** :fire: is ready to use for evaluating longer Sora-like videos!\n- [06\u002F2024] **Model Info Documentation**: Information on video generative models in our [VBench Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard) \n is documented [HERE](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models).\n- [05\u002F2024] **PyPI Update**: PyPI package `vbench` is updated to version 0.1.2. This includes changes in the preprocessing for high-resolution images\u002Fvideos for `imaging_quality`, support for evaluating customized videos, and minor bug fixes.\n- [04\u002F2024] We release all the videos we sampled and used for VBench evaluation. 
[![Dataset Download](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDataset-Download-red?logo=googlechrome&logoColor=red)](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13pH95aUN-hVgybUZJBx1e_08R6xhZs5X) See details [here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos).\n- [03\u002F2024] :fire: **[VBench-Trustworthiness](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_trustworthiness)** :fire: We now support evaluating the **trustworthiness** (*e.g.*, culture, fairness, bias, safety) of video generative models.\n- [03\u002F2024] :fire: **[VBench-I2V](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v)** :fire: We now support evaluating **Image-to-Video (I2V)** models. We also provide [Image Suite](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fdOZKQ7HWZtgutCKKA7CMzOhMFUGv4Zx?usp=sharing).\n- [03\u002F2024] We support **evaluating customized videos**! See [here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002F?tab=readme-ov-file#new-evaluate-your-own-videos) for instructions.\n- [02\u002F2024] **VBench** accepted to CVPR 2024 as Highlight: [![VBench Paper (CVPR 2024)](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-CVPR%202024-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17982)\n- [01\u002F2024] PyPI package is released! [![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fvbench)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fvbench\u002F). Simply `pip install vbench`.\n- [12\u002F2023] :fire: **[VBench](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench?tab=readme-ov-file#usage)** :fire: Evaluation code released for 16 **Text-to-Video (T2V) evaluation** dimensions. 
\n    - `['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']`\n- [11\u002F2023] Prompt Suites released. (See prompt lists [here](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts))\n  \n\n\u003Ca name=\"evaluation_results\">\u003C\u002Fa>\n## :mortar_board: Evaluation Results\n\n***See our leaderboard for the most updated ranking and numerical results (with models like Gen-3, Kling, Pika)***. [![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_505e8984c687.jpg\" width=\"65%\"\u002F>\n\u003C\u002Fp>\nWe visualize the evaluation results of the 12 most recent top-performing long video generation models across 16 VBench dimensions.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_9f07cef35eda.jpg\" width=\"48%\" style=\"margin-right: 4%;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_f58147a55925.jpg\" width=\"48%\" \u002F>\n\u003C\u002Fp>\n\nAdditionally, we present radar charts separately for the evaluation results of open-source and closed-source models. 
The results are normalized per dimension for clearer comparisons.\n\n#### :trophy: Leaderboard\n\nSee numeric values at our [Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard) :1st_place_medal::2nd_place_medal::3rd_place_medal:\n\n\n\n#### :film_projector: Model Info\nSee [model info](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models) for the video generation models we used for evaluation.\n\n\u003C!-- The values have been normalized for better readability of the chart. The normalization process involves scaling each set of performance values to a common scale between 0.3 and 0.8. The formula used for normalization is: (value - min value) \u002F (max value - min value). -->\n\n\u003Ca name=\"installation\">\u003C\u002Fa>\n## :hammer: Installation\n### Install with pip \n```\npip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118 # or any other PyTorch version with CUDA\u003C=12.1\npip install vbench\n```\n\nTo evaluate certain video generation dimensions, you need to install [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) via:\n   ```\n   pip install detectron2@git+https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2.git\n   ```\n    \nIf there is an error during [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) installation, see [here](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Finstall.html). 
Detectron2 works only with CUDA 12.1 or 11.x.\n\nDownload [VBench_full_info.json](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fblob\u002Fmaster\u002Fvbench\u002FVBench_full_info.json) to your running directory to read the benchmark prompt suites.\n\n### Install with git clone\n    git clone https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench.git\n    pip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118 # or other version with CUDA\u003C=12.1\n    pip install VBench\n    \nIf there is an error during [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) installation, see [here](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Finstall.html).\n\n\u003Ca name=\"usage\">\u003C\u002Fa>\n## Usage\nUse VBench to evaluate videos and video generative models.\n- A Side Note: VBench is designed for evaluating different models on a standard benchmark. Therefore, by default, we enforce evaluation on the **standard VBench prompt lists** to ensure **fair comparisons** among different video generation models. That's also why we give warnings when a required video is not found. This is done by defining the set of prompts in [VBench_full_info.json](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fblob\u002Fmaster\u002Fvbench\u002FVBench_full_info.json). However, we understand that many users would like to use VBench to evaluate their own videos, or videos generated from prompts that do not belong to the VBench Prompt Suite, so we also added the function of **Evaluating Your Own Videos**. Simply set `mode=custom_input`, and you can evaluate your own videos.\n\n\n### **[New]** Evaluate Your Own Videos\nWe support evaluating any video. Simply provide the path to the video file, or the path to the folder that contains your videos. 
There is no requirement on the videos' names.\n- Note: We support customized videos \u002F prompts for the following dimensions: `'subject_consistency', 'background_consistency', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality'`\n\n\nTo evaluate videos with a customized input prompt, run our script with `--mode=custom_input`:\n```\npython evaluate.py \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=custom_input\n```\nAlternatively, you can use our command:\n```\nvbench evaluate \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=custom_input\n```\n\nTo evaluate using multiple GPUs, use one of the following commands:\n```\ntorchrun --nproc_per_node=${GPUS} --standalone evaluate.py ...args...\n```\nor \n```\nvbench evaluate --ngpus=${GPUS} ...args...\n```\n\n### Evaluation on the Standard Prompt Suite of VBench\n\n##### Command Line \n```bash\nvbench evaluate --videos_path $VIDEO_PATH --dimension $DIMENSION\n```\nFor example:\n```bash\nvbench evaluate --videos_path \"sampled_videos\u002Flavie\u002Fhuman_action\" --dimension \"human_action\"\n```\n##### Python\n```python\nfrom vbench import VBench\nmy_VBench = VBench(device, \u003Cpath\u002Fto\u002FVBench_full_info.json>, \u003Cpath\u002Fto\u002Fsave\u002Fdir>)\nmy_VBench.evaluate(\n    videos_path = \u003Cvideo_path>,\n    name = \u003Cname>,\n    dimension_list = [\u003Cdimension>, \u003Cdimension>, ...],\n)\n```\nFor example: \n```python\nfrom vbench import VBench\nmy_VBench = VBench(device, \"vbench\u002FVBench_full_info.json\", \"evaluation_results\")\nmy_VBench.evaluate(\n    videos_path = \"sampled_videos\u002Flavie\u002Fhuman_action\",\n    name = \"lavie_human_action\",\n    dimension_list = [\"human_action\"],\n)\n```\n\n### Evaluation of Different Content Categories\n\n##### Command Line \n```bash\nvbench evaluate \\\n    --videos_path $VIDEO_PATH \\\n    
--dimension $DIMENSION \\\n    --mode=vbench_category \\\n    --category=$CATEGORY\n```\nor \n```\npython evaluate.py \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=vbench_category\n```\n\n### Example of Evaluating VideoCrafter-1.0\nWe have provided scripts to download VideoCrafter-1.0 samples, and the corresponding evaluation scripts.\n```\n# download sampled videos\nsh scripts\u002Fdownload_videocrafter1.sh\n\n# evaluate VideoCrafter-1.0\nsh scripts\u002Fevaluate_videocrafter1.sh\n```\n### Submit to Leaderboard\nWe have provided scripts for calculating the `Total Score`, `Quality Score`, and `Semantic Score` in the Leaderboard. You can run them locally to obtain the aggregate scores or as a final check before submitting to the Leaderboard.\n\n```bash\n# Pack the evaluation results into a zip file.\ncd evaluation_results\nzip -r ..\u002Fevaluation_results.zip .\n\n# [Optional] get the total score of your submission file.\npython scripts\u002Fcal_final_score.py --zip_file {path_to_evaluation_results.zip} --model_name {your_model_name}\n```\n\nYou can submit the json file to [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n### How to Calculate Total Score\n\nTo calculate the **Total Score**, we follow these steps:\n\n1. **Normalization**:  \n   Each dimension's results are normalized using the following formula:\n\n    ```bash\n    Normalized Score = (dim_score - min_val) \u002F (max_val - min_val)\n    ```\n\n2. **Quality Score**:  \n   The `Quality Score` is a weighted average of the following dimensions:  \n   **subject consistency**, **background consistency**, **temporal flickering**, **motion smoothness**, **aesthetic quality**, **imaging quality**, and **dynamic degree**.\n\n3. 
**Semantic Score**:  \n   The `Semantic Score` is a weighted average of the following dimensions:  \n   **object class**, **multiple objects**, **human action**, **color**, **spatial relationship**, **scene**, **appearance style**, **temporal style**, and **overall consistency**.\n\n\n4. **Weighted Average Calculation**:  \n   The **Total Score** is a weighted average of the `Quality Score` and `Semantic Score`:\n    ```bash\n    Total Score = w1 * Quality Score + w2 * Semantic Score\n    ```\n\n   \nThe minimum and maximum values used for normalization in each dimension, as well as the weighting coefficients for the average calculation, can be found in the `scripts\u002Fconstant.py` file.\n\n### Total Score for VBench-I2V\nFor Total Score Calculation for VBench-I2V, you can refer to [link](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v#submit-to-leaderboard).\n\n\u003Ca name=\"pretrained_models\">\u003C\u002Fa>\n## :gem: Pre-Trained Models\n[Optional] Please download the pre-trained weights according to the guidance in the `model_path.txt` file for each model in the `pretrained` folder to `~\u002F.cache\u002Fvbench`.\n\n\u003Ca name=\"prompt_suite\">\u003C\u002Fa>\n## :bookmark_tabs: Prompt Suite\n\nWe provide prompt lists are at `prompts\u002F`. 
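For reference, the Total Score aggregation described in the section above can be sketched in a few lines of Python. All constants below (the per-dimension min/max values and the weights `w1`, `w2`) are placeholders for illustration only; the actual values live in `scripts/constant.py`.

```python
# Sketch of the normalization + weighted-average steps used for the Total Score.
# The min/max values and weights here are placeholders, NOT the real constants
# from scripts/constant.py.

def normalize(dim_score: float, min_val: float, max_val: float) -> float:
    """Map a raw dimension score into [0, 1]."""
    return (dim_score - min_val) / (max_val - min_val)

def weighted_average(scores: dict, weights: dict) -> float:
    """Weighted average over a set of normalized dimension scores."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Hypothetical normalized results for two of the quality dimensions.
quality_score = weighted_average(
    {"aesthetic_quality": normalize(6.2, 0.0, 10.0),    # -> 0.62
     "imaging_quality": normalize(68.0, 0.0, 100.0)},   # -> 0.68
    {"aesthetic_quality": 1.0, "imaging_quality": 1.0},
)
semantic_score = 0.75  # assume this was aggregated the same way

w1, w2 = 0.8, 0.2  # placeholder weights
total_score = w1 * quality_score + w2 * semantic_score
```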


Check out the [details of the prompt suites](https://github.com/Vchitect/VBench/tree/master/prompts), along with instructions on [**how to sample videos for evaluation**](https://github.com/Vchitect/VBench/tree/master/prompts).

<a name="sampled_videos"></a>
## :bookmark_tabs: Sampled Videos

[![Dataset Download](https://img.shields.io/badge/Dataset-Download-red?logo=googlechrome&logoColor=red)](https://drive.google.com/drive/folders/13pH95aUN-hVgybUZJBx1e_08R6xhZs5X)

To facilitate future research and ensure full transparency, we release all the videos we sampled and used for VBench evaluation. You can download them from [Google Drive](https://drive.google.com/drive/folders/13pH95aUN-hVgybUZJBx1e_08R6xhZs5X).

See detailed explanations of the sampled videos [here](https://github.com/Vchitect/VBench/tree/master/sampled_videos).

We also provide the detailed settings for the models under evaluation [here](https://github.com/Vchitect/VBench/tree/master/sampled_videos#what-are-the-details-of-the-video-generation-models).

<a name="evaluation_method_suite"></a>
## :surfer: Evaluation Method Suite

To perform evaluation on one dimension, run this:
```bash
python evaluate.py --videos_path $VIDEOS_PATH --dimension $DIMENSION
```
- The complete list of dimensions:
    ```
    ['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']
    ```

Alternatively, you can evaluate multiple models and multiple dimensions using this script:
```bash
bash evaluate.sh
```
- The default sampled video paths:
    ```
    vbench_videos/{model}/{dimension}/{prompt}-{index}.mp4/gif
    ```

#### Before evaluating the temporal flickering dimension, it is necessary to filter out the static videos first.
To filter static videos in the temporal flickering dimension, run this:
```bash
# This only filters out static videos whose prompts match those of the temporal_flickering dimension.
python static_filter.py --videos_path $VIDEOS_PATH
```
You can adjust the filtering scope as follows:
```bash
# 1. Change the filtering scope to consider all files inside videos_path for filtering.
python static_filter.py --videos_path $VIDEOS_PATH --filter_scope all

# 2. Specify the path to a JSON file ($filename) to consider only videos whose prompts match those listed in $filename.
python static_filter.py --videos_path $VIDEOS_PATH --filter_scope $filename
```

## :hearts: Acknowledgement

#### :muscle: VBench Contributors
Order is based on the time of joining the project:
> [Ziqi Huang](https://ziqihuangg.github.io/), [Yinan He](https://github.com/yinanhe), [Jiashuo Yu](https://scholar.google.com/citations?user=iH0Aq0YAAAAJ&hl=zh-CN), [Fan Zhang](https://github.com/zhangfan-p), [Nattapol Chanpaisit](https://nattapolchan.github.io/me), [Xiaojie Xu](https://github.com/xjxu21), [Qianli Ma](https://github.com/MqLeet), [Ziyue Dong](https://github.com/DZY-irene), [Dian Zheng](https://zhengdian1.github.io/), [Hongbo Liu](https://github.com/Alexios-hub), [Kai Zou](https://github.com/Jacky-hate)

#### :hugs: Open-Sourced Repositories
This project wouldn't be possible without the following open-sourced repositories:
[AMT](https://github.com/MCG-NKU/AMT/), [UMT](https://github.com/OpenGVLab/unmasked_teacher), 
[RAM](https://github.com/xinyu1205/recognize-anything), [CLIP](https://github.com/openai/CLIP), [RAFT](https://github.com/princeton-vl/RAFT), [GRiT](https://github.com/JialianW/GRiT), [IQA-PyTorch](https://github.com/chaofengc/IQA-PyTorch/), [ViCLIP](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid), and [LAION Aesthetic Predictor](https://github.com/LAION-AI/aesthetic-predictor).

### Related Links

We are putting together [Awesome-Evaluation-of-Visual-Generation](https://github.com/ziqihuangg/Awesome-Evaluation-of-Visual-Generation), which collects works for evaluating visual generation.

Our related projects: [Evaluation Agent](https://vchitect.github.io/Evaluation-Agent-project/), [Uni-MMMU](https://vchitect.github.io/Uni-MMMU-Project/), and [WorldLens](https://worldbench.github.io/worldlens).

```bibtex
@InProceedings{zhang2024evaluationagent,
      title = {Evaluation Agent: Efficient and Promptable Evaluation Framework for Visual Generative Models},
      author = {Zhang, Fan and Tian, Shulin and Huang, Ziqi and Qiao, Yu and Liu, Ziwei},
      booktitle = {Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2025}
}

@article{zou2025unimmmumassivemultidisciplinemultimodal,
      title = {{Uni-MMMU}: A Massive Multi-discipline Multimodal Unified Benchmark},
      author = {Kai Zou and Ziqi Huang and Yuhao Dong and Shulin Tian and Dian Zheng and Hongbo Liu and Jingwen He and Bin Liu and Yu Qiao and Ziwei Liu},
      journal = {arXiv preprint arXiv:2510.13759},
      year = {2025}
}

@article{worldlens,
    title   = {{WorldLens}: Full-Spectrum Evaluations of Driving World Models in Real World},
    author  = {Ao Liang and Lingdong Kong and Tianyi Yan and Hongsi Liu and Wesley Yang and Ziqi Huang and Wei Yin and Jialong Zuo and Yixuan Hu and Dekai Zhu and Dongyue Lu and Youquan Liu and Guangfeng Jiang and Linfeng Li and Xiangtai Li and Long Zhuo and Lai Xing Ng and Benoit R. Cottereau and Changxin Gao and Liang Pan and Wei Tsang Ooi and Ziwei Liu},
    journal = {arXiv preprint arXiv:2512.xxxxx},
    year    = {2025}
}
```

### Contact Details

**How to Reach Us:**
- Code Issues: Please open an [issue](https://github.com/Vchitect/VBench/issues) in our GitHub repository for any problems or bugs.
- Evaluation Requests: To submit your sampled videos for evaluation, please complete this [Google Form](https://forms.gle/wHk1xe7ecvVNj7yAA).
- General Inquiries: **Check our [FAQ](https://github.com/Vchitect/VBench/blob/master/README-FAQ.md)** for common questions. For other questions, contact Ziqi Huang at ZIQI002 [at] e [dot] ntu [dot] edu [dot] sg.

<a name="citation_and_acknowledgement"></a>
## :black_nib: Citation

   If you find our repo useful for your research, please consider citing our paper:

   ```bibtex
    @InProceedings{huang2023vbench,
        title={{VBench}: Comprehensive Benchmark Suite for Video Generative Models},
        author={Huang, Ziqi and He, Yinan and Yu, Jiashuo and Zhang, Fan and Si, Chenyang and Jiang, Yuming and Zhang, Yuanhan and Wu, Tianxing and Jin, Qingyang and Chanpaisit, Nattapol and Wang, Yaohui and Chen, Xinyuan and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
        year={2024}
    }

    @article{huang2025vbench++,
        title={{VBench++}: Comprehensive and Versatile Benchmark Suite for Video Generative Models},
        author={Huang, Ziqi and Zhang, Fan and Xu, Xiaojie and He, Yinan and Yu, Jiashuo and Dong, Ziyue and Ma, Qianli and 
Chanpaisit, Nattapol and Si, Chenyang and Jiang, Yuming and Wang, Yaohui and Chen, Xinyuan and Chen, Ying-Cong and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},
        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
        year={2025},
        doi={10.1109/TPAMI.2025.3633890}
    }

    @article{zheng2025vbench2,
        title={{VBench-2.0}: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness},
        author={Zheng, Dian and Huang, Ziqi and Liu, Hongbo and Zou, Kai and He, Yinan and Zhang, Fan and Zhang, Yuanhan and He, Jingwen and Zheng, Wei-Shi and Qiao, Yu and Liu, Ziwei},
        journal={arXiv preprint arXiv:2503.21755},
        year={2025}
    }
   ```

![vbench_logo](https://oss.gittoolsai.com/images/Vchitect_VBench_readme_9c7b090edd87.jpg)


<!-- [![arXiv](https://img.shields.io/badge/arXiv-2311.99999-b31b1b.svg)](https://arxiv.org/abs/2311.99999) -->
[![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard)
[![VBench Arena (click here to view the generated videos!)](https://img.shields.io/badge/%F0%9F%A4%97%20VBench%20Arena-blue)](https://huggingface.co/spaces/Vchitect/VBench_Video_Arena)
[![VBench-2.0 Arena 
(click here to view the generated videos!)](https://img.shields.io/badge/%F0%9F%A4%97%20VBench2.0%20Arena-blue)](https://huggingface.co/spaces/Vchitect/VBench2.0_Video_Arena)
[![Project Page](https://img.shields.io/badge/VBench-Website-green?logo=googlechrome&logoColor=green)](https://vchitect.github.io/VBench-project/)
[![Project Page](https://img.shields.io/badge/VBench%202.0-Website-green?logo=googlechrome&logoColor=green)](https://vchitect.github.io/VBench-2.0-project/)
[![Dataset Download](https://img.shields.io/badge/Dataset-Download-red?logo=googlechrome&logoColor=red)](https://drive.google.com/drive/folders/1on66fnZ8atRoLDimcAXMxSwRxqN8_0yS?usp=sharing)
[![PyPI](https://img.shields.io/pypi/v/vbench)](https://pypi.org/project/vbench/)
[![Video](https://img.shields.io/badge/VBench-Video-c4302b?logo=youtube&logoColor=red)](https://www.youtube.com/watch?v=7IhCC8Qqn8Y)
[![Video](https://img.shields.io/badge/VBench%202.0-Video-c4302b?logo=youtube&logoColor=red)](https://www.youtube.com/watch?v=kJrzKy9tgAc)
![Visitors](https://oss.gittoolsai.com/images/Vchitect_VBench_readme_9f71ff971f14.png)


This repository provides a unified implementation of the **VBench series** of works, supporting comprehensive evaluation of video generative models across a wide range of abilities and scenarios.

If your question is not addressed in this README, please contact Ziqi Huang at ZIQI002 [at] e [dot] ntu [dot] edu [dot] sg.

## Table of Contents
- [Overview](#overview) - *See this section for where each component is located, and for the differences between VBench, VBench++, and VBench-2.0.*
- [Updates](#updates)
- [Evaluation Results](#evaluation_results)
- [Model Info of Video Generation Models](https://github.com/Vchitect/VBench/tree/master/sampled_videos#what-are-the-details-of-the-video-generation-models)
- [Installation](#installation)
- [Usage](#usage)
- [Prompt Suite](#prompt_suite)
- [Sampled Videos](#sampled_videos)
- [Evaluation Method Suite](#evaluation_method_suite)
- [Citation and Acknowledgement](#citation_and_acknowledgement)


<a 
name=\"overview\">\u003C\u002Fa>\n## :mega: 概述\n\n本仓库提供了**VBench系列**工作的统一实现，支持对视频生成模型在广泛能力与场景下的全面评估。\n\n### (1) VBench\n\n***简而言之：评估视频生成——基准测试 • 评估维度 • 评估方法 • 人类一致性 • 洞察***\n\n> [![VBench论文（CVPR 2024）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-CVPR%202024-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17982) **VBench：面向视频生成模型的综合基准测试套件**  \u003Cbr>\n> 黄子琪（https:\u002F\u002Fziqihuangg.github.io\u002F）\u003Csup>∗\u003C\u002Fsup>, 何一楠（https:\u002F\u002Fgithub.com\u002Fyinanhe\u002F）\u003Csup>∗\u003C\u002Fsup>, 于嘉硕（https:\u002F\u002Fscholar.google.com\u002Fcitations?user=iH0Aq0YAAAAJ&hl=zh-CN）\u003Csup>∗\u003C\u002Fsup>, 张帆（https:\u002F\u002Fgithub.com\u002Fzhangfan-p\u002F）\u003Csup>∗\u003C\u002Fsup>, 史晨阳（https:\u002F\u002Fchenyangsi.top\u002F）、蒋宇明（https:\u002F\u002Fyumingj.github.io\u002F）、张元翰（https:\u002F\u002Fzhangyuanhan-ai.github.io\u002F）、吴天行（https:\u002F\u002Ftianxingwu.github.io\u002F）、金庆阳（https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench）、纳塔波尔·差恩派西特（https:\u002F\u002Fnattapolchan.github.io\u002Fme）、王耀辉（https:\u002F\u002Fwyhsirius.github.io\u002F）、陈欣源（https:\u002F\u002Fscholar.google.com\u002Fcitations?user=3fWSC8YAAAAJ）、王利民（https:\u002F\u002Fwanglimin.github.io）、林大华（http:\u002F\u002Fdahua.site\u002F）\u003Csup>+\u003C\u002Fsup>, 乔宇（http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html）\u003Csup>+\u003C\u002Fsup>, 刘子威（https:\u002F\u002Fliuziwei7.github.io\u002F）\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n> IEEE\u002FCVF计算机视觉与模式识别会议（**CVPR**），2024年\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_61f2997b202e.jpg)\n\n\n我们提出了**VBench**，一个面向视频生成模型的综合基准测试套件。我们设计了一个全面且分层的\u003Cb>评估维度套件\u003C\u002Fb>, 将“视频生成质量”分解为多个明确的维度，以促进精细化和客观化的评估。针对每个维度和每类内容，我们精心设计了\u003Cb>提示词套件\u003C\u002Fb>作为测试用例，并从一组视频生成模型中采样\u003Cb>生成的视频\u003C\u002Fb>。对于每个评估维度，我们专门设计了\u003Cb>评估方法套件\u003C\u002Fb>, 采用精心设计的方法或指定的流程进行自动化的客观评估。此外，我们还对每个维度的生成视频进行了\u003Cb>人类偏好标注\u003C\u002Fb>, 
并证明VBench的评估结果与\u003Cb>人类感知高度一致\u003C\u002Fb>。VBench可以从多个角度提供有价值的洞察。\n\n\n**注意**：VBench各组件的代码和README文件位于[此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster)，相对路径为`.`。\n\n```bibtex\n@InProceedings{huang2023vbench,\n    title={{VBench}: Comprehensive Benchmark Suite for Video Generative Models},\n    author={Huang, Ziqi and He, Yinan and Yu, Jiashuo and Zhang, Fan and Si, Chenyang and Jiang, Yuming and Zhang, Yuanhan and Wu, Tianxing and Jin, Qingyang and Chanpaisit, Nattapol and Wang, Yaohui and Chen, Xinyuan and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n    booktitle={Proceedings of the IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition},\n    year={2024}\n}\n```\n\n### (2) VBench++\n\n***简而言之：在VBench的基础上扩展了三个模块，分别是(1)用于图像到视频生成的VBench-I2V、(2)用于长视频评估的VBench-Long，以及(3)涵盖公平性、偏差和安全性的VBench-Trustworthiness。***\n\n> [![VBench++（TPAMI 2025）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-TPAMI%202025-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503)  **VBench++：面向视频生成模型的全面且多功能基准测试套件** \u003Cbr>\n> [黄子琪](https:\u002F\u002Fziqihuangg.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [张帆](https:\u002F\u002Fgithub.com\u002Fzhangfan-p)\u003Csup>∗\u003C\u002Fsup>, [徐晓杰](https:\u002F\u002Fgithub.com\u002Fxjxu21), [何一楠](https:\u002F\u002Fgithub.com\u002Fyinanhe), [于家硕](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=iH0Aq0YAAAAJ&hl=zh-CN), [董子悦](https:\u002F\u002Fgithub.com\u002FDZY-irene), [马倩莉](https:\u002F\u002Fgithub.com\u002FMqLeet), [纳塔波尔·差派西特](https:\u002F\u002Fnattapolchan.github.io\u002Fme), [司晨阳](https:\u002F\u002Fchenyangsi.top\u002F), [姜宇明](https:\u002F\u002Fyumingj.github.io\u002F), [王耀辉](https:\u002F\u002Fwyhsirius.github.io\u002F), [陈欣源](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=3fWSC8YAAAAJ), [陈英聪](https:\u002F\u002Fwww.yingcong.me\u002F), [王利民](https:\u002F\u002Fwanglimin.github.io), 
[林大华](http:\u002F\u002Fdahua.site\u002F)\u003Csup>+\u003C\u002Fsup>, [乔宇](http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html)\u003Csup>+\u003C\u002Fsup>, [刘子威](https:\u002F\u002Fliuziwei7.github.io\u002F)\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n> IEEE模式分析与机器智能汇刊（**TPAMI**），2025年\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_cfa21d802611.jpg)\n\n\n\u003Cb>VBench++\u003C\u002Fb> 支持广泛的视频生成任务，包括文本到视频和图像到视频，并配备自适应的图像集，以在不同场景下实现公正的评估。它不仅评估技术质量，还关注生成模型的可信度，从而提供对模型性能的全面视角。我们持续将更多视频生成模型纳入VBench，以便向社区展示视频生成领域的最新发展动态。\n\n\n**注意**：VBench++ 各组件的代码及README位于：\n- (1) VBench-I2V（图像到视频）：[链接](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v)，相对路径为 `vbench2_beta_i2v`\n- (2) VBench-Long（长视频评估）：[链接](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_long)，相对路径为 `vbench2_beta_long`\n- (3) VBench-Trustworthiness（公平性、偏差和安全性）：[链接](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_trustworthiness)，相对路径为 `vbench2_beta_trustworthiness`\n\n*这些模块属于VBench++，而非VBench或VBench-2.0。然而，为了保持对已安装该仓库用户的向后兼容性，我们保留了原有的相对路径名称，并在此予以说明。\n*\n\n```bibtex\n@article{huang2025vbench++,\n    title={{VBench++}: Comprehensive and Versatile Benchmark Suite for Video Generative Models},\n    author={Huang, Ziqi and Zhang, Fan and Xu, Xiaojie and He, Yinan and Yu, Jiashuo and Dong, Ziyue and Ma, Qianli and Chanpaisit, Nattapol and Si, Chenyang and Jiang, Yuming and Wang, Yaohui and Chen, Xinyuan and Chen, Ying-Cong and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, \n    year={2025},\n    doi={10.1109\u002FTPAMI.2025.3633890}\n}\n```\n\n### (3) VBench-2.0\n\n***简而言之：扩展VBench以评估内在忠实性——这是下一代视频生成模型面临的关键挑战。***\n\n> 
[![VBench-2.0报告（arXiv）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-2.0%20Report-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.21755) **VBench-2.0：推进面向内在忠实性的视频生成基准测试套件**\u003Cbr>\n> [郑典](https:\u002F\u002Fzhengdian1.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [黄子琪](https:\u002F\u002Fziqihuangg.github.io\u002F)\u003Csup>∗\u003C\u002Fsup>, [刘洪波](https:\u002F\u002Fgithub.com\u002FAlexios-hub), [邹凯](https:\u002F\u002Fgithub.com\u002FJacky-hate), [何一楠](https:\u002F\u002Fgithub.com\u002Fyinanhe), [张帆](https:\u002F\u002Fgithub.com\u002Fzhangfan-p), [张元翰](https:\u002F\u002Fzhangyuanhan-ai.github.io\u002F)、 [何静雯](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=GUxrycUAAAAJ&hl=zh-CN), [郑伟石](https:\u002F\u002Fwww.isee-ai.cn\u002F~zhwshi\u002F)\u003Csup>+\u003C\u002Fsup>, [乔宇](http:\u002F\u002Fmmlab.siat.ac.cn\u002Fyuqiao\u002Findex.html)\u003Csup>+\u003C\u002Fsup>, [刘子威](https:\u002F\u002Fliuziwei7.github.io\u002F)\u003Csup>+\u003C\u002Fsup>\u003Cbr>\n\n![overall_structure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_f289e847ae6d.jpg)\nVBench-2.0概览。(a) VBench-2.0的范围。视频生成模型已经从在像素保真度和基本提示遵循等基础技术层面实现表面忠实性，发展到应对更复杂的内在忠实性挑战，包括常识推理、物理现实感、人体运动和创意构图等。尽管VBench主要评估早期的技术质量，但VBench-2.0扩展了基准测试框架，以评估这些高级能力，从而确保对下一代模型进行更全面的评价。(b) VBench-2.0的评估维度。VBench-2.0引入了一个结构化的评估体系，包含五大类和18个细粒度的能力维度。\n\n**注意**：VBench-2.0各组件的代码及README位于[链接](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0)，相对路径为 `VBench-2.0`。\n\n```bibtex\n@article{zheng2025vbench2,\n    title={{VBench-2.0}: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness},\n    author={Zheng, Dian and Huang, Ziqi and Liu, Hongbo and Zou, Kai and He, Yinan and Zhang, Fan and Zhang, Yuanhan and He, Jingwen and Zheng, Wei-Shi and Qiao, Yu and Liu, Ziwei},\n    journal={arXiv preprint arXiv:2503.21755},\n    year={2025}\n}\n```\n\n\n\n\u003Ca name=\"updates\">\u003C\u002Fa>\n\n## :fire: 更新\n- 
[03\u002F2026] **VBench-I2V Arena** 发布：[![VBench-I2V Arena（在此查看生成的视频！）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench--I2V%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBenchI2V_Video_Arena) 在这里查看生成的视频，并为您喜欢的视频投票。您可以根据所选文本提示，探索由您选择的模型生成的视频。\n- [11\u002F2025] **VBench++** 被 TPAMI 接收：[![VBench++（TPAMI 2025）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-TPAMI%202025-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503)\n- [05\u002F2025] 我们支持对 VBench-2.0 进行 **自定义视频评估**！请参阅 [此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0#new-evaluating-single-dimension-of-your-own-videos) 获取说明。\n- [04\u002F2025] **[AIGC 视频中的人体异常检测](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0\u002Fvbench2\u002Fthird_party\u002FViTDetector)：** 我们发布了用于评估 AIGC 视频中人体解剖质量的流程，包括一个关于真实和 AIGC 视频的人体异常手动标注数据集，以及异常检测的训练流程。\n- [03\u002F2025] :fire: **重大更新！我们发布了 [VBench-2.0](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002FVBench-2.0)！** :fire: 视频生成模型已经从在像素保真度和基本提示遵循等基础技术方面实现 *表面忠实性*，发展到解决与 *内在忠实性* 相关的更复杂挑战，包括常识推理、基于物理的真实感、人体动作和创意构图。虽然 VBench 主要评估早期的技术质量，但 VBench-2.0 扩展了基准测试框架，以评估这些高级能力，从而确保对下一代模型进行更全面的评估。\n- [01\u002F2025] **PyPI 更新：v0.1.5** 预处理错误修复，支持 torch>=2.0。\n- [01\u002F2025] **VBench Arena** 发布：[![Arena（在此查看生成的视频！）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20VBench%20Arena-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Video_Arena) 在这里查看生成的视频，并为您喜欢的视频投票。该演示包含超过 18 万个生成的视频，您可以根据所选文本提示，探索由您选择的模型（我们已支持 40 个模型）生成的视频。\n\u003C!-- - [11\u002F2024] **VBench++** 发布：[![VBench++ 报告（arXiv）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench++-arXiv%20Report-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.13503) -->\n- [09\u002F2024] **VBench-Long 排行榜** 上线：我们的 VBench-Long 榜单现已收录 10 款长视频生成模型。VBench 
榜单目前共有 40 款文生视频（长短皆有）模型。欢迎所有视频生成模型参与！[![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n- [09\u002F2024] **PyPI 更新：PyPI 包已更新至版本 [0.1.4](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Freleases\u002Ftag\u002Fv0.1.4)：** 错误修复及多 GPU 推理支持。\n- [08\u002F2024] **更长且更具描述性的提示**：[可在此获取](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts\u002Fgpt_enhanced_prompts)! 我们借鉴 [CogVideoX](https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo?tab=readme-ov-file#prompt-optimization) 的提示优化技术，利用 GPT-4o 对 VBench 提示进行增强，使其更长、更具描述性，同时不改变其原意。\n- [08\u002F2024] **VBench 榜单** 更新：我们的榜单目前已有 28 款 *T2V 模型* 和 12 款 *I2V 模型*。欢迎所有视频生成模型参与！[![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n- [06\u002F2024] :fire: **[VBench-Long](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_long)** :fire: 已准备就绪，可用于评估类似 Sora 的较长视频！\n- [06\u002F2024] **模型信息文档**：关于我们 [VBench 榜单](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard) 中视频生成模型的信息，已在 [此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models) 文档化。\n- [05\u002F2024] **PyPI 更新**：PyPI 包 `vbench` 已更新至 0.1.2 版本。此次更新包括针对高分辨率图像\u002F视频的预处理改进（用于 `imaging_quality`）、支持自定义视频评估，以及一些小的错误修复。\n- [04\u002F2024] 我们发布了所有用于 VBench 评估的样本视频。[![数据集下载](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDataset-Download-red?logo=googlechrome&logoColor=red)](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13pH95aUN-hVgybUZJBx1e_08R6xhZs5X) 详情请见 [此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos)。\n- 
[03\u002F2024] :fire: **[VBench-Trustworthiness](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_trustworthiness)** :fire: 我们现在支持评估视频生成模型的 **可信度**（例如文化、公平性、偏见、安全性）。\n- [03\u002F2024] :fire: **[VBench-I2V](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v)** :fire: 我们现在支持评估 **图像转视频（I2V）** 模型。我们还提供了 [图像套件](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1fdOZKQ7HWZtgutCKKA7CMzOhMFUGv4Zx?usp=sharing)。\n- [03\u002F2024] 我们支持 **自定义视频评估**！请参阅 [此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002F?tab=readme-ov-file#new-evaluate-your-own-videos) 获取说明。\n- [02\u002F2024] **VBench** 作为亮点被 CVPR 2024 接受：[![VBench 论文（CVPR 2024）](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVBench-CVPR%202024-b31b1b?logo=arxiv&logoColor=red)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.17982)\n- [01\u002F2024] PyPI 包发布！[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fvbench)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fvbench\u002F) 只需 `pip install vbench` 即可。\n- [12\u002F2023] :fire: **[VBench](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench?tab=readme-ov-file#usage)** :fire: 16 个 **文生视频（T2V）评估** 维度的评估代码已发布。\n    - `['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']`\n- [11\u002F2023] 提示套件发布。（提示列表请见 [此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts)）\n  \n\n\u003Ca name=\"evaluation_results\">\u003C\u002Fa>\n\n## :mortar_board: 评估结果\n\n***请查看我们的排行榜，获取最新排名和数值结果（包含 Gen-3、Kling、Pika 
等模型）***。[![HuggingFace](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Leaderboard-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_505e8984c687.jpg\" width=\"65%\"\u002F>\n\u003C\u002Fp>\n我们可视化了16个VBench维度下，12个最新表现最佳的长视频生成模型的评估结果。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_9f07cef35eda.jpg\" width=\"48%\" style=\"margin-right: 4%;\" \u002F>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_readme_f58147a55925.jpg\" width=\"48%\" \u002F>\n\u003C\u002Fp>\n\n此外，我们还分别展示了开源模型和闭源模型的雷达图评估结果。为了更清晰地进行比较，结果已在每个维度上进行了归一化处理。\n\n#### :trophy: 排行榜\n\n请在我们的[排行榜](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)中查看具体数值：1st_place_medal::2nd_place_medal::3rd_place_medal:\n\n#### :film_projector: 模型信息\n有关我们用于评估的视频生成模型的详细信息，请参阅[模型信息](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models)。\n\n\u003C!-- 数值已进行归一化处理，以便更清晰地展示图表。归一化过程是将每组性能值缩放到0.3到0.8之间的共同尺度。归一化公式为：(值 - 最小值) \u002F (最大值 - 最小值)。 -->\n\n\u003Ca name=\"installation\">\u003C\u002Fa>\n## :hammer: 安装\n### 使用 pip 安装\n```\npip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118 # 或其他支持 CUDA\u003C=12.1 的 PyTorch 版本\npip install vbench\n```\n\n若需评估部分视频生成能力，您需要通过以下方式安装 [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2)：\n   ```\n   pip install detectron2@git+https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2.git\n   ```\n    \n如果在安装 [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) 
时出现错误，请参阅[此处](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Finstall.html)。Detectron2 仅支持 CUDA 12.1 或 11.X。\n\n请将 [VBench_full_info.json](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fblob\u002Fmaster\u002Fvbench\u002FVBench_full_info.json) 下载到您的运行目录，以读取基准测试提示集。\n\n### 使用 git 克隆安装\n    git clone https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench.git\n    pip install torch torchvision --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118 # 或其他支持 CUDA\u003C=12.1 的版本\n    pip install VBench\n    \n如果在安装 [detectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) 时遇到问题，请参阅[此处](https:\u002F\u002Fdetectron2.readthedocs.io\u002Fen\u002Flatest\u002Ftutorials\u002Finstall.html)。\n\n\u003Ca name=\"usage\">\u003C\u002Fa>\n## 使用方法\n使用 VBench 评估视频及视频生成模型。\n- 补充说明：VBench 旨在基于标准基准测试评估不同模型。因此，默认情况下，我们会强制使用 **标准 VBench 提示列表** 进行评估，以确保不同视频生成模型之间的 **公平比较**。这也是为什么当找不到所需视频时会发出警告的原因。这是通过在 [VBench_full_info.json](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fblob\u002Fmaster\u002Fvbench\u002FVBench_full_info.json) 中定义提示集来实现的。然而，我们也理解许多用户希望使用 VBench 评估自己的视频，或使用不属于 VBench 提示集的提示生成的视频，因此我们还增加了 **评估自定义视频** 的功能。只需设置 `mode=custom_input`，即可评估您自己的视频。\n\n\n### **[新]** 评估自定义视频\n我们支持评估任意视频。只需提供视频文件的路径，或包含您视频的文件夹路径即可，对视频名称无要求。\n- 注意：我们支持针对以下维度的自定义视频\u002F提示：`'subject_consistency', 'background_consistency', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality'`\n\n\n要使用自定义输入提示评估视频，请运行我们的脚本并添加 `--mode=custom_input` 参数：\n```\npython evaluate.py \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=custom_input\n```\n或者您也可以使用以下命令：\n```\nvbench evaluate \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=custom_input\n```\n\n若需使用多张 GPU 进行评估，可使用以下命令：\n```\ntorchrun --nproc_per_node=${GPUS} --standalone evaluate.py ...args...\n```\n或\n```\nvbench evaluate 
--ngpus=${GPUS} ...args...\n```\n\n### 基于 VBench 标准提示集的评估\n\n##### 命令行\n```bash\nvbench evaluate --videos_path $VIDEO_PATH --dimension $DIMENSION\n```\n例如：\n```bash\nvbench evaluate --videos_path \"sampled_videos\u002Flavie\u002Fhuman_action\" --dimension \"human_action\"\n```\n##### Python\n```python\nfrom vbench import VBench\nmy_VBench = VBench(device, \u003Cpath\u002Fto\u002FVBench_full_info.json>, \u003Cpath\u002Fto\u002Fsave\u002Fdir>)\nmy_VBench.evaluate(\n    videos_path = \u003Cvideo_path>,\n    name = \u003Cname>,\n    dimension_list = [\u003Cdimension>, \u003Cdimension>, ...],\n)\n```\n例如：\n```python\nfrom vbench import VBench\nmy_VBench = VBench(device, \"vbench\u002FVBench_full_info.json\", \"evaluation_results\")\nmy_VBench.evaluate(\n    videos_path = \"sampled_videos\u002Flavie\u002Fhuman_action\",\n    name = \"lavie_human_action\",\n    dimension_list = [\"human_action\"],\n)\n```\n\n### 不同内容类别的评估\n\n##### 命令行\n```bash\nvbench evaluate \\\n    --videos_path $VIDEO_PATH \\\n    --dimension $DIMENSION \\\n    --mode=vbench_category \\\n    --category=$CATEGORY\n```\n或\n```\npython evaluate.py \\\n    --dimension $DIMENSION \\\n    --videos_path \u002Fpath\u002Fto\u002Ffolder_or_video\u002F \\\n    --mode=vbench_category\n```\n\n### VideoCrafter-1.0 的评估示例\n我们提供了下载 VideoCrafter-1.0 样本及其相应评估脚本的工具。\n```\n# 下载样本视频\nsh scripts\u002Fdownload_videocrafter1.sh\n\n# 评估 VideoCrafter-1.0\nsh scripts\u002Fevaluate_videocrafter1.sh\n```\n\n### 提交至排行榜\n我们提供了用于计算排行榜中 `总分`、`质量分` 和 `语义分` 的脚本。您可以在本地运行这些脚本以获得汇总分数，或在提交至排行榜前作为最终检查。\n\n```bash\n# 将评估结果打包成 zip 文件。\ncd evaluation_results\nzip -r ..\u002Fevaluation_results.zip .\n\n# 【可选】获取您的提交文件的总分。\npython scripts\u002Fcal_final_score.py --zip_file {path_to_evaluation_results.zip} --model_name {your_model_name}\n```\n\n您可以将 json 文件提交至 [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard)\n\n### 如何计算总分\n\n要计算**总分**，我们按照以下步骤进行：\n\n1. 
**归一化**：  \n   每个维度的结果都使用以下公式进行归一化：\n\n    ```bash\n    归一化分数 = (dim_score - min_val) \u002F (max_val - min_val)\n    ```\n\n2. **质量得分**：  \n   `质量得分`是以下维度的加权平均值：  \n   **主体一致性**、**背景一致性**、**时间闪烁**、**运动流畅性**、**美学质量**、**成像质量**以及**动态程度**。\n\n3. **语义得分**：  \n   `语义得分`是以下维度的加权平均值：  \n   **物体类别**、**多物体**、**人类动作**、**颜色**、**空间关系**、**场景**、**外观风格**、**时间风格**以及**整体一致性**。\n\n\n4. **加权平均计算**：  \n   **总分**是`质量得分`和`语义得分`的加权平均：\n    ```bash\n    总分 = w1 * 质量得分 + w2 * 语义得分\n    ```\n\n   \n每个维度用于归一化的最小值和最大值，以及平均计算中的权重系数，都可以在`scripts\u002Fconstant.py`文件中找到。\n\n### VBench-I2V 的总分\n关于 VBench-I2V 的总分计算，可以参考[链接](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_i2v#submit-to-leaderboard)。\n\n\u003Ca name=\"pretrained_models\">\u003C\u002Fa>\n## :gem: 预训练模型\n[可选] 请根据 `pretrained` 文件夹中各模型的 `model_path.txt` 文件中的指导，将预训练权重下载到 `~\u002F.cache\u002Fvbench` 目录下。\n\n\u003Ca name=\"prompt_suite\">\u003C\u002Fa>\n## :bookmark_tabs: 提示词套件\n\n我们提供的提示词列表位于 `prompts\u002F` 目录下。\n\n查看[提示词套件的详细信息](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts)，以及关于[如何采样视频用于评估](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fprompts)的说明。\n\n\u003Ca name=\"sampled_videos\">\u003C\u002Fa>\n## :bookmark_tabs: 采样视频\n\n[![数据集下载](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDataset-Download-red?logo=googlechrome&logoColor=red)](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13pH95aUN-hVgybUZJBx1e_08R6xhZs5X)\n\n为了便于未来的研究并确保完全透明，我们发布了所有用于 VBench 评估的采样视频。您可以在[Google Drive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13pH95aUN-hVgybUZJBx1e_08R6xhZs5X)上下载这些视频。\n\n有关采样视频的详细说明，请参阅[此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos)。\n\n我们还提供了被评估模型的详细设置[在此处](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fsampled_videos#what-are-the-details-of-the-video-generation-models)。\n\n\u003Ca 
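上文“如何计算总分”一节的三个步骤（归一化、两类维度的加权平均、总分加权平均），可用如下 Python 最小草图示意。注意：示例中的各维度得分以及权重 `w1`、`w2` 均为演示用的假设值，真实的 min\u002Fmax 与权重系数请以仓库中的 `scripts\u002Fconstant.py` 为准。

```python
# 总分计算流程的最小示意（所有数值均为假设的演示值，
# 真实的 min/max 与权重请参见仓库中的 scripts/constant.py）

def normalize(dim_score, min_val, max_val):
    """归一化分数 = (dim_score - min_val) / (max_val - min_val)"""
    return (dim_score - min_val) / (max_val - min_val)

def weighted_average(scores, weights):
    """加权平均：sum(w_i * s_i) / sum(w_i)"""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# 假设性的归一化维度得分：7 个质量维度、9 个语义维度
quality_scores = [0.95, 0.96, 0.97, 0.98, 0.60, 0.62, 0.68]
semantic_scores = [0.85, 0.40, 0.90, 0.88, 0.35, 0.50, 0.23, 0.24, 0.26]

# 质量得分 / 语义得分：各维度的加权平均（此处示意为等权）
quality_score = weighted_average(quality_scores, [1] * len(quality_scores))
semantic_score = weighted_average(semantic_scores, [1] * len(semantic_scores))

# 总分 = w1 * 质量得分 + w2 * 语义得分（w1、w2 为假设值）
w1, w2 = 4, 1
total_score = (w1 * quality_score + w2 * semantic_score) / (w1 + w2)
print(round(total_score, 4))
```

该草图仅展示计算结构；提交排行榜前，仍应使用 `scripts\u002Fcal_final_score.py` 在官方常量下计算最终分数。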
name=\"evaluation_method_suite\">\u003C\u002Fa>\n## :surfer: 评估方法套件\n\n要对某一维度进行评估，运行以下命令：\n```\npython evaluate.py --videos_path $VIDEOS_PATH --dimension $DIMENSION\n```\n- 完整的维度列表如下：\n    ```\n    ['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_quality', 'object_class', 'multiple_objects', 'human_action', 'color', 'spatial_relationship', 'scene', 'temporal_style', 'appearance_style', 'overall_consistency']\n    ```\n\n或者，您也可以使用此脚本同时评估多个模型和多个维度：\n```\nbash evaluate.sh\n```\n- 默认的采样视频路径为：\n    ```\n    vbench_videos\u002F{model}\u002F{dimension}\u002F{prompt}-{index}.mp4\u002Fgif\n    ```\n\n\n\n#### 在评估时间闪烁维度之前，必须先过滤掉静态视频。\n要过滤时间闪烁维度中的静态视频，运行以下命令：\n```\n# 此命令仅过滤掉与时间闪烁维度提示匹配的静态视频。\npython static_filter.py --videos_path $VIDEOS_PATH\n```\n您可以调整过滤范围，方法如下：\n```\n# 1. 将过滤范围改为考虑 videos_path 内的所有文件进行过滤。\npython static_filter.py --videos_path $VIDEOS_PATH --filter_scope all\n\n# 2. 指定一个 JSON 文件路径 ($filename)，只考虑提示与 $filename 中列出的提示匹配的视频。\npython static_filter.py --videos_path $VIDEOS_PATH --filter_scope $filename\n```\n\n## :hearts: 致谢\n\n#### :muscle: VBench 贡献者\n顺序按加入项目的时间排列：\n> [黄子奇](https:\u002F\u002Fziqihuangg.github.io\u002F)、[何一楠](https:\u002F\u002Fgithub.com\u002Fyinanhe)、[于家硕](https:\u002F\u002Fscholar.google.com\u002Fcitations?user=iH0Aq0YAAAAJ&hl=zh-CN)、[张帆](https:\u002F\u002Fgithub.com\u002Fzhangfan-p)、[纳塔波尔·差恩派西特](https:\u002F\u002Fnattapolchan.github.io\u002Fme)、[徐晓杰](https:\u002F\u002Fgithub.com\u002Fxjxu21)、[马千里](https:\u002F\u002Fgithub.com\u002FMqLeet)、[董子悦](https:\u002F\u002Fgithub.com\u002FDZY-irene)、[郑典](https:\u002F\u002Fzhengdian1.github.io\u002F)、[刘洪波](https:\u002F\u002Fgithub.com\u002FAlexios-hub)、[邹凯](https:\u002F\u002Fgithub.com\u002FJacky-hate)\n\n#### :hugs: 
开源仓库\n如果没有以下开源仓库，本项目将无法实现：\n[AMT](https:\u002F\u002Fgithub.com\u002FMCG-NKU\u002FAMT\u002F)、[UMT](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002Funmasked_teacher)、[RAM](https:\u002F\u002Fgithub.com\u002Fxinyu1205\u002Frecognize-anything)、[CLIP](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP)、[RAFT](https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FRAFT)、[GRiT](https:\u002F\u002Fgithub.com\u002FJialianW\u002FGRiT)、[IQA-PyTorch](https:\u002F\u002Fgithub.com\u002Fchaofengc\u002FIQA-PyTorch\u002F)、[ViCLIP](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FInternVideo\u002Ftree\u002Fmain\u002FData\u002FInternVid)以及[LAION 美学预测器](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002Faesthetic-predictor)。\n\n### 相关链接\n\n我们正在整理 [Awesome-Evaluation-of-Visual-Generation](https:\u002F\u002Fgithub.com\u002Fziqihuangg\u002FAwesome-Evaluation-of-Visual-Generation)，该项目收集了用于评估视觉生成模型的相关工作。\n\n我们的相关项目包括：[Evaluation Agent](https:\u002F\u002Fvchitect.github.io\u002FEvaluation-Agent-project\u002F)、[Uni-MMMU](https:\u002F\u002Fvchitect.github.io\u002FUni-MMMU-Project\u002F) 和 [WorldLens](https:\u002F\u002Fworldbench.github.io\u002Fworldlens)。\n\n```bibtex\n@InProceedings{zhang2024evaluationagent,\n      title = {Evaluation Agent: 针对视觉生成模型的高效且可提示式评估框架},\n      author = {Zhang, Fan and Tian, Shulin and Huang, Ziqi and Qiao, Yu and Liu, Ziwei},\n      booktitle={计算语言学协会年会 (ACL), 2025},\n      year = {2025}\n}\n\n@article{zou2025unimmmumassivemultidisciplinemultimodal,\n      title={{Uni-MMMU}: 一个大规模多学科多模态统一基准},\n      author = {Kai Zou and Ziqi Huang and Yuhao Dong and Shulin Tian and Dian Zheng and Hongbo Liu and Jingwen He and Bin Liu and Yu Qiao and Ziwei Liu},\n      journal={arXiv 预印本 arXiv:2510.13759},\n      year = {2025}\n}\n\n@article{worldlens,\n    title   = {{WorldLens}: 在真实世界中对驾驶类世界模型的全谱评估},\n    author  = {Ao Liang and Lingdong Kong and Tianyi Yan and Hongsi Liu and Wesley Yang and Ziqi Huang and Wei Yin and Jialong Zuo and Yixuan Hu and Dekai Zhu and Dongyue Lu and Youquan 
Liu and Guangfeng Jiang and Linfeng Li and Xiangtai Li and Long Zhuo and Lai Xing Ng and Benoit R. Cottereau and Changxin Gao and Liang Pan and Wei Tsang Ooi and Ziwei Liu},\n    journal = {arXiv 预印本 arXiv:2512.xxxxx},\n    year    = {2025}\n}\n```\n\n### 联系方式\n\n**如何联系我们：**\n- 代码问题：如遇到任何问题或 bug，请在我们的 GitHub 仓库中提交 [issue](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues)。\n- 评估请求：如需提交您的样本视频进行评估，请填写此 [Google 表单](https:\u002F\u002Fforms.gle\u002FwHk1xe7ecvVNj7yAA)。\n- 一般咨询：请查看我们的 [FAQ](https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fblob\u002Fmaster\u002FREADME-FAQ.md)，以获取常见问题解答。如有其他问题，请联系黄子琪，邮箱为 ZIQI002 [at] e [dot] ntu [dot] edu [dot] sg。\n\n\n\n\u003Ca name=\"citation_and_acknowledgement\">\u003C\u002Fa>\n## :black_nib: 引用\n\n   如果您认为我们的仓库对您的研究有所帮助，请考虑引用我们的论文：\n\n   ```bibtex\n    @InProceedings{huang2023vbench,\n        title={{VBench}: 视频生成模型的综合基准套件},\n        author={Huang, Ziqi and He, Yinan and Yu, Jiashuo and Zhang, Fan and Si, Chenyang and Jiang, Yuming and Zhang, Yuanhan and Wu, Tianxing and Jin, Qingyang and Chanpaisit, Nattapol and Wang, Yaohui and Chen, Xinyuan and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n        booktitle={IEEE\u002FCVF 计算机视觉与模式识别会议论文集},\n        year={2024}\n    }\n\n    @article{huang2025vbench++,\n        title={{VBench++}: 视频生成模型的全面且多功能基准套件},\n        author={Huang, Ziqi and Zhang, Fan and Xu, Xiaojie and He, Yinan and Yu, Jiashuo and Dong, Ziyue and Ma, Qianli and Chanpaisit, Nattapol and Si, Chenyang and Jiang, Yuming and Wang, Yaohui and Chen, Xinyuan and Chen, Ying-Cong and Wang, Limin and Lin, Dahua and Qiao, Yu and Liu, Ziwei},\n        journal={IEEE 模式分析与机器智能汇刊}, \n        year={2025},\n        doi={10.1109\u002FTPAMI.2025.3633890}\n    }\n\n    @article{zheng2025vbench2,\n        title={{VBench-2.0}: 推进视频生成基准套件以提升内在忠实度},\n        author={Zheng, Dian and Huang, Ziqi and Liu, Hongbo and Zou, Kai and He, Yinan and Zhang, Fan and Zhang, Yuanhan and He, Jingwen and Zheng, Wei-Shi 
and Qiao, Yu and Liu, Ziwei},\n        journal={arXiv 预印本 arXiv:2503.21755},\n        year={2025}\n    }\n   ```","# VBench 快速上手指南\n\nVBench 是一个全面的视频生成模型基准测试套件，支持从基础技术质量到内在真实性（Intrinsic Faithfulness）的多维度评估。本指南涵盖 VBench、VBench++ 及 VBench-2.0 的核心安装与使用流程。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。\n*   **Python 版本**: Python 3.8 或更高版本。\n*   **硬件要求**: 建议配备 NVIDIA GPU 以加速评估过程（部分评估指标依赖深度学习模型）。\n*   **前置依赖**:\n    *   `pip` (包管理工具)\n    *   `git` (代码克隆)\n    *   `ffmpeg` (视频处理必备，用于视频解码和帧提取)\n\n**安装 ffmpeg (Ubuntu\u002FDebian):**\n```bash\nsudo apt-get update && sudo apt-get install -y ffmpeg\n```\n\n**安装 ffmpeg (macOS):**\n```bash\nbrew install ffmpeg\n```\n\n## 安装步骤\n\n您可以通过 PyPI 直接安装稳定版，或从源码安装以获取最新功能。\n\n### 方式一：通过 PyPI 安装（推荐）\n\n这是最快捷的方式，适合大多数用户。\n\n```bash\npip install vbench\n```\n\n> **国内加速提示**: 如果下载速度较慢，建议使用清华或阿里镜像源：\n> ```bash\n> pip install vbench -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方式二：从源码安装（获取最新特性）\n\n如果您需要使用 VBench-2.0 的最新评估维度或特定分支功能，建议克隆仓库安装。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench.git\ncd VBench\npip install -e .\n```\n\n> **国内加速提示**: 克隆仓库时若速度慢，可使用 Gitee 镜像（如有）或配置 git 代理。\n\n## 基本使用\n\n安装完成后，您可以使用 Python 脚本调用 VBench 对生成的视频进行评估。以下是一个最简单的文本生成视频（Text-to-Video）评估示例。\n\n### 1. 准备数据\n确保您有一个包含生成视频的文件夹，以及对应的提示词（Prompt）列表。假设结构如下：\n```text\nmy_videos\u002F\n├── video_001.mp4\n├── video_002.mp4\n└── ...\nprompts.txt (每行一个提示词)\n```\n\n### 2. 
运行评估\n\n创建一个 Python 脚本（例如 `run_eval.py`），输入以下代码：\n\n```python\nfrom vbench.api import Benchmark\n\n# 初始化 Benchmark\n# dimension_list: 选择要评估的维度，例如 'subject_consistency', 'background_consistency', 'aesthetic_quality' 等\n# 若不指定，默认评估所有维度\nbenchmark = Benchmark(\n    dimension_list=['subject_consistency', 'background_consistency', 'aesthetic_quality'],\n    chart_dir='.\u002Fresults\u002Fcharts',\n    video_dir='.\u002Fmy_videos',\n    prompt_path='.\u002Fprompts.txt'\n)\n\n# 运行评估\nresults = benchmark.run()\n\n# 打印结果\nprint(results)\n```\n\n运行脚本：\n```bash\npython run_eval.py\n```\n\n### 3. 进阶：使用 VBench-2.0 评估内在真实性\n\n如果您需要评估模型的物理规律遵循度、常识推理等高级能力（VBench-2.0 特性），需使用 VBench-2.0 子目录中的代码：\n\n```python\n# 注意：VBench-2.0 的代码位于 VBench-2.0 目录下；该目录名含连字符和点号，\n# 不能直接作为包名 import，需先将该目录加入模块搜索路径\nimport sys\nsys.path.insert(0, '.\u002FVBench-2.0')\n\nfrom vbench2.api import Benchmark as Benchmark2\n\nbenchmark_v2 = Benchmark2(\n    dimension_list=['physics_based_realism', 'human_motion'], # 示例维度\n    video_dir='.\u002Fmy_videos',\n    prompt_path='.\u002Fprompts.txt'\n)\n\nresults_v2 = benchmark_v2.run()\nprint(results_v2)\n```\n\n### 4. 
查看结果\n评估完成后，结果将以字典形式返回，并在指定的 `chart_dir` 目录下生成可视化图表和详细报告。您也可以访问 [HuggingFace Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FVchitect\u002FVBench_Leaderboard) 对比您的模型与社区其他模型的表现。","某 AI 初创公司的算法团队正在研发一款面向短视频平台的文生视频模型，急需在发布前对多个迭代版本进行全方位的能力评估与横向对比。\n\n### 没有 VBench 时\n- **评估维度单一**：团队仅依赖人工主观打分或简单的像素级指标（如 FID），难以量化视频在“动态幅度”、“时空一致性”等关键维度的表现，导致优化方向模糊。\n- **缺乏统一标准**：不同研究员使用各自编写的脚本和提示词库进行测试，结果无法复现，团队内部对哪个版本更优争论不休，严重拖慢决策效率。\n- **人力成本高昂**：为了获得相对可靠的结论，需要组织大量人员进行长时间的视频观看与标注，耗费数天时间且容易因疲劳产生偏差。\n- **难以对齐人类偏好**：自建的评价体系往往与真实用户的观感脱节，模型在测试集得分高，但生成内容在实际应用中仍显得僵硬或不自然。\n\n### 使用 VBench 后\n- **多维深度洞察**：VBench 提供了涵盖主体一致性、背景质量、运动流畅度等 16 个维度的自动化评估，精准定位模型在“复杂动作生成”上的短板，指导针对性调优。\n- **标准化基准对比**：利用 VBench 统一的提示词套件和评测协议，团队能在同一标准下快速复现并对比不同版本模型，用数据说话，瞬间达成共识。\n- **自动化高效流程**：只需调用 API 即可批量完成数百个视频样本的评分，将原本需要数天的人工评估工作压缩至几小时内完成，大幅加速迭代周期。\n- **高度人类对齐**：VBench 的评分逻辑经过严格的人类偏好校准，其给出的高分视频在真实用户测试中确实获得了更高的满意度，确保了研发成果的商业价值。\n\nVBench 将模糊的主观感受转化为精确的多维数据，成为视频生成模型从实验室走向高质量落地的核心标尺。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FVchitect_VBench_9c7b090e.jpg","Vchitect","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FVchitect_291f114c.jpg","",null,"https:\u002F\u002Fvchitect.intern-ai.org.cn\u002F","https:\u002F\u002Fgithub.com\u002FVchitect",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",97.5,{"name":85,"color":86,"percentage":87},"Shell","#89e051",2.5,1569,110,"2026-04-09T01:56:12","Apache-2.0","未说明",{"notes":94,"python":92,"dependencies":95},"README 提供的片段主要介绍了 VBench、VBench++ 和 VBench-2.0 的项目概述、论文引用及目录结构，未包含具体的安装步骤、环境配置要求或依赖库列表。文中提到该工具可通过 PyPI 安装（包名为 vbench），并提供了 Hugging Face Spaces 在线演示链接。具体的运行环境需求（如操作系统、GPU、内存、Python 版本等）在提供的文本中未明确提及，通常需参考项目中 'Installation' 章节的完整内容或 PyPI 页面详情。",[96],"vbench 
(PyPI)",[16,15,52,98],"其他",[100,101,102,103,104,105,106,107],"aigc","evaluation-kit","gen-ai","stable-diffusion","text-to-video","video-generation","benchmark","dataset","2026-03-27T02:49:30.150509","2026-04-10T07:43:47.600344",[111,116,121,126,131,136,140],{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},27345,"如何向 VBench Leaderboard 提交评估结果？需要提交什么格式的文件？","用户通常生成 16 个维度的 `*eval_results.json` 文件。提交时，应将这些 JSON 文件打包成一个 zip 目录进行上传。在填写表单时，\"source\" 字段可以填写团队名称（team name）。如果上传后结果显示为 0 或异常，请检查 JSON 格式是否正确，并确认是否完整上传了包含所有维度文件的 zip 包。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F30",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},27346,"运行 Human Anomaly 推理时报错 'KeyError: Adafactor is already registered' 或环境配置错误怎么办？","这通常是环境配置问题。建议执行以下步骤：\n1. 将 PyTorch 版本切换回 2.5.1。\n2. 确保 CUDA 版本正确设置为 11.8（可通过 `nvcc --version` 验证）。\n3. 重新完整安装整个环境，以确保 gcc 和 cuda 配置正确。维护者已在多台机器上验证此配置可正常工作。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F125",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},27347,"为什么我复现的 CogVideoX-2B 模型得分低于 Leaderboard 上的分数？","得分差异通常源于评估使用的视频文件或代码版本不同。官方已公开用于评估 CogVideoX-2B 的具体资源：\n1. 评估用视频下载地址：https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zuQ47Uvze4157o4YMta0Zqz9G8TdHcXZ\u002Fview?usp=share_link\n2. 评估代码路径：https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Ftree\u002Fmaster\u002Fvbench2_beta_long\n3. 
使用的 Prompt 文件：`prompts\u002Fgpt_enhanced_prompts\u002Fall_dimension_longer.txt`\n请使用上述官方资源重新进行推理和评估以确保一致性。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F61",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},27348,"如何复现 AnimateDiffV2 的基准测试结果？官方使用的 LoRA 和推理步数是多少？","官方提供的链接实际上是 realistic checkpoint v5.1（UNET 权重），而非 LoRA 权重。若要复现接近基准的结果，请确保使用正确的 checkpoint 文件。关于具体的推理步数（inference steps）和其他详细设置，需参考官方代码库中的具体实现或等待维护者进一步披露详细参数，目前已知直接使用提供的 checkpoint 是关键。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F44",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},27349,"提交评估结果后，如何查看进度或删除错误的上传历史？","提交样本视频后，评估通常需要一定时间（例如一周内）完成，结果发布后会显示在 VBench Leaderboard 页面上。关于删除上传历史，目前界面可能不支持直接删除，若结果异常建议联系维护者或重新提交正确的文件覆盖。对于 \"source\" 字段显示为 \"user upload\" 的情况，请在提交表单中手动填写为团队名称。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F101",{"id":137,"question_zh":138,"answer_zh":139,"source_url":115},27350,"Leaderboard 上的 'Aesthetic Quality' 和 'Dynamic Degree' 分数似乎颠倒了，是怎么回事？","这是一个已知的问题，官方曾交换过这两个维度的数值。如果您发现自己的模型在这两个维度上的得分与预期不符（例如导致总分变低），请检查是否与官方最新修正后的逻辑一致。维护者已确认并修复了该数据映射问题，请以 Leaderboard 最新显示为准。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},27351,"可以直接输入视频片段（不带提示词）进行评估吗？支持图生视频吗？","VBench 主要设计用于评估文本生成视频（T2V）模型。关于直接输入视频片段或图生视频（Image-to-Video）的支持情况，需视具体评估维度而定。如果在运行过程中遇到显存溢出（OOM），可能是因为视频帧数过多，建议尝试缩短视频长度或减少帧数。具体支持细节建议查阅最新文档或根据报错信息调整输入规格。","https:\u002F\u002Fgithub.com\u002FVchitect\u002FVBench\u002Fissues\u002F13",[146,151,156],{"id":147,"version":148,"summary_zh":149,"released_at":150},180479,"v0.1.4","PyPI 版本：0.1.4\n## 错误修复\n- 修复 `aesthetic_quality` 维度中的 OOM 错误（1a213e4fd100025a58ecb7f08537202f0a3aef28）\n\n## 功能特性\n- 支持使用 `torchrun` 进行多 GPU 推理，用法请参见 README.md（fc4620b4d620114b77842c336f7a193c37e8e12e）\n- 新增 `vbench evaluate` 命令，支持启动多 GPU 推理（e3e66641826978192923ad19afefc78132853a36）","2024-09-03T10:42:04",{"id":152,"version":153,"summary_zh":154,"released_at":155},180480,"v0.1.2","## Bug 修复\n- 修复 
imaging_quality 中高分辨率图像\u002F视频的预处理问题（46275d9、b484867）\n- 修复 PyPI 包中 VBench_full_info.json 文件未找到的问题（90f0973）\n\n## 功能新增\n- 添加 `filter_scope` 参数，用于选择参与静态视频过滤的视频（fb75b02）\n- 添加 `mode` 参数，以在三种不同的评估模式之间进行选择（02e2ee1）：\n  > custom_input：从 --prompt\u002F--prompt_file 标志接收输入提示，或直接从文件名中读取\n  > vbench_standard：使用 VBench 的标准提示集进行评估（默认）\n  > vbench_category：使用 VBench 的特定类别进行评估\n- 允许通过包含字典的文件指定提示进行评估，使用 `--prompt_file` 参数。（efaa6d3）\n\n## 其他变更\n- 在 `--dimension` 标志中接收维度列表，而非单个维度（91bece8）","2024-06-04T16:42:50",{"id":157,"version":158,"summary_zh":159,"released_at":160},180481,"v0.1.1","此版本包含：\n\n- 所有16个T2V评估维度的预训练模型权重\n- 用于T2V的VBench提示词集\n- 用于评估所有16个T2V维度的源代码\n- `vbench evaluate` 命令，用于在VBench提示词集上评估T2V（cc68b18）\n- `static_filter` 命令，用于过滤掉静态视频（针对时间闪烁维度）（fd3613a）","2024-06-05T15:31:53"]
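针对 v0.1.2 中提到的 `--prompt_file` 参数（“允许通过包含字典的文件指定提示进行评估”），下面给出一个生成此类提示文件的最小草图。其中“视频路径 → 提示词”的映射结构仅为本文假设，实际字段格式请以官方 README 为准。

```python
import json

# 假设的 prompt_file 结构：视频路径 -> 对应提示词（仅作演示，实际格式以官方 README 为准）
prompt_map = {
    "my_videos/video_001.mp4": "a person is surfing at sunset",
    "my_videos/video_002.mp4": "a red car driving through snow",
}

# 写出 JSON 文件，供 --prompt_file 参数读取
with open("prompt_file.json", "w", encoding="utf-8") as f:
    json.dump(prompt_map, f, ensure_ascii=False, indent=2)

# 随后可在自定义评估时传入该文件，例如：
#   vbench evaluate --dimension subject_consistency \
#       --videos_path ./my_videos --mode=custom_input --prompt_file prompt_file.json
```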