[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-DAMO-NLP-SG--VideoLLaMA2":3,"tool-DAMO-NLP-SG--VideoLLaMA2":65},[4,17,27,36,44,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],"图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[13,26,14,35],"视频",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":10,"last_commit_at":42,"category_tags":43,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,"2026-04-10T11:13:16",[26,52,35,53,14,54,15,13,55],"数据工具","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":62,"last_commit_at":63,"category_tags":64,"status":16},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[15,52,54],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":23,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":105,"github_topics":80,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":137},7244,"DAMO-NLP-SG\u002FVideoLLaMA2","VideoLLaMA2","VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs","VideoLLaMA2 是一款由达摩院开源的多模态大语言模型，专为深度理解视频内容而设计。它不仅能“看懂”视频中的画面变化，还能“听懂”其中的声音信息，从而实现对视频时空动态与音频语义的综合分析。\n\n传统视频分析模型往往难以同时处理复杂的画面运动轨迹和背景声音，导致在回答涉及时间顺序、动作因果或音画关联的问题时表现不佳。VideoLLaMA2 通过先进的时空建模技术和音频理解模块，有效解决了这一痛点，显著提升了在零样本视频问答、事件推理等任务上的准确率，并在多个权威评测基准中取得了领先成绩。\n\n这款工具非常适合人工智能研究人员、多模态算法开发者以及希望构建智能视频分析应用的技术团队使用。无论是需要探索前沿视频理解理论的研究者，还是致力于开发视频摘要、智能监控或交互式教育产品的工程师，都能从中获益。其独特的技术亮点在于将视觉的空间特征、时间演变与音频信号深度融合，使模型能像人类一样综合视听线索来理解视频故事。项目采用 Apache 2.0 协议开源，提供了预训练模型、演示平台及详细文档，方便用户快速上手实验与二次开发。","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_ae98d62b2f12.png\" width=\"150\" style=\"margin-bottom: 0.2;\"\u002F>\n\u003Cp>\n\n\u003Ch3 align=\"center\">\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476\" style=\"color:#9C276A\">\nVideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs\u003C\u002Fa>\u003C\u002Fh3>\n\u003Ch5 align=\"center\"> If our project helps you, please give us a star ⭐ on GitHub to support us. 
🙏🙏 \u003C\u002Fh2>\n\n\u003Ch5 align=\"center\">\n\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-AV--Demo-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2-AV)\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Demo-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2)\n[![hf_checkpoint](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Checkpoints-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FDAMO-NLP-SG\u002Fvideollama-2-6669b6b6f0493188305c87ed)\n[![hf_data](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-MSVC-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FDAMO-NLP-SG\u002FMulti-Source-Video-Captioning) \u003Cbr>\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fblob\u002Fmain\u002FLICENSE) \n[![Hits](https:\u002F\u002Fhits.seeyoufarm.com\u002Fapi\u002Fcount\u002Fincr\u002Fbadge.svg?url=https%3A%2F%2Fgithub.com%2FDAMO-NLP-SG%2FVideoLLaMA2&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https:\u002F\u002Fhits.seeyoufarm.com)\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FDAMO-NLP-SG\u002FVideoLLaMA2?color=critical&label=Issues)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues?q=is%3Aopen+is%3Aissue)\n[![GitHub closed issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002FDAMO-NLP-SG\u002FVideoLLaMA2?color=success&label=Issues)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues?q=is%3Aissue+is%3Aclosed)  \u003Cbr>\n[![hf_paper](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Paper%20In%20HF-red.svg)](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2406.07476)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2406.07476-AD1C18.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476) \u003Cbr>\n\n\u003C\u002Fh5>\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-egoschema-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-egoschema-1?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fvideo-question-answering-on-perception-test)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-question-answering-on-perception-test?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fvideo-question-answering-on-mvbench)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-question-answering-on-mvbench?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-video-mme-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-video-mme-1?p=videollama-2-advancing-spatial-temporal) 
\u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-video-mme)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-video-mme?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n\n\u003Cdetails open>\u003Csummary>💡 Some other multimodal-LLM projects from our team may interest you ✨. \u003C\u002Fsummary>\u003Cp>\n\u003C!--  may -->\n\n> [**VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding**](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3) \u003Cbr>\n> Boqiang Zhang\u003Csup>* \u003C\u002Fsup>, Kehan Li\u003Csup>* \u003C\u002Fsup>, Zesen Cheng\u003Csup>* \u003C\u002Fsup>, Zhiqiang Hu\u003Csup>* \u003C\u002Fsup>, Yuqian Yuan\u003Csup>* \u003C\u002Fsup>, Guanzheng Chen\u003Csup>* \u003C\u002Fsup>, Sicong Leng\u003Csup>* \u003C\u002Fsup>, Yuming Jiang\u003Csup>* \u003C\u002Fsup>, Hang Zhang\u003Csup>* \u003C\u002Fsup>, Xin Li\u003Csup>* \u003C\u002Fsup>, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVideoLLaMA3.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2501.13106-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13106) \u003Cbr>\n\n> [**Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding**](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA) \u003Cbr>\n> Hang Zhang, Xin Li, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVideo-LLaMA.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2306.02858-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02858) \u003Cbr>\n\n> [**VCD: Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16922) \u003Cbr>\n> Sicong Leng\u003Csup>* \u003C\u002Fsup>, Hang Zhang\u003Csup>* \u003C\u002Fsup>, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVCD)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVCD.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVCD)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2311.16922-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16922) \u003Cbr>\n\n> [**The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12787) \u003Cbr>\n> Sicong Leng, Yun Xing, Zesen Cheng, Yang Zhou, Hang Zhang, Xin Li, Deli Zhao, Shijian Lu, Chunyan Miao, Lidong Bing 
\u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FCMM)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FCMM.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FCMM)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2410.12787-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12787) \u003Cbr>\n\n> [**Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17243) \u003Cbr>\n> Zesen Cheng*, Hang Zhang*, Kehan Li*, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FInf-CLIP)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FInf-CLIP.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FInf-CLIP)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2410.17243-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17243) \u003Cbr>\n\n\u003C\u002Fp>\u003C\u002Fdetails>\n\n\u003Cdiv align=\"center\">\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fassets\u002F18526640\u002Fe0e7951c-f392-42ed-afad-b2c7984d3e38\" width=\"800\">\u003C\u002Fdiv>\n\n\n## 📰 News\n* **[2025.01.21]** 🚀🚀 We are excited to officially launch [VideoLLaMA3](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3), featuring enhanced performance across image and video benchmarks, along with a variety of easy-to-follow inference cookbooks. Try it out today! \n* **[2024.10.22]**  Release checkpoints of [VideoLLaMA2.1-7B-AV](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV). The audio_visual branch code can be seen here: https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Ftree\u002Faudio_visual.\n* **[2024.10.15]**  Release checkpoints of [VideoLLaMA2.1-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base) and [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F).\n* **[2024.08.14]**  Release checkpoints of [VideoLLaMA2-72B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B-Base) and [VideoLLaMA2-72B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B).\n* **[2024.07.30]**  Release checkpoints of [VideoLLaMA2-8x7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B-Base) and [VideoLLaMA2-8x7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B).\n* **[2024.06.25]**  🔥🔥 As of Jun 25, our [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [MLVU Leaderboard](https:\u002F\u002Fgithub.com\u002FJUNJIE99\u002FMLVU?tab=readme-ov-file#trophy-mini-leaderboard).\n* **[2024.06.18]**  🔥🔥 As of Jun 18, our [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [VideoMME Leaderboard](https:\u002F\u002Fvideo-mme.github.io\u002Fhome_page.html#leaderboard).\n* **[2024.06.17]**  👋👋 Update technical report with the latest results and the missing references. 
If you have works closely related to VideoLLaMA 2 but not mentioned in the paper, feel free to let us know.  \n* **[2024.06.14]**  🔥🔥 [Online Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2) is available.\n* **[2024.06.03]**  Release training, evaluation, and serving codes of VideoLLaMA 2.\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_4e76077a2896.png\" width=\"800\" \u002F>\n\n## 🛠️ Requirements and Installation\nBasic Dependencies:\n* Python >= 3.8\n* Pytorch >= 2.2.0\n* CUDA Version >= 11.8\n* transformers == 4.40.0 (for reproducing paper results)\n* tokenizers == 0.19.1\n\n**[Online Mode]** Install required packages (better for development):\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\ncd VideoLLaMA2\npip install -r requirements.txt\npip install flash-attn==2.5.8 --no-build-isolation\n```\n\n**[Offline Mode]** Install VideoLLaMA2 as a Python package (better for direct use):\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\ncd VideoLLaMA2\npip install --upgrade pip  # enable PEP 660 support\npip install -e .\npip install flash-attn==2.5.8 --no-build-isolation\n```\n\n## 🚀 Main Results\n\n### Multi-Choice Video QA & Video Captioning\n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_704a59f871b2.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n###  Open-Ended Video QA\n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_f03ef48b6447.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n### Audio QA \n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_350bb65716d5.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n### Audio-Visual QA \n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_d977a55b772d.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n\n## :earth_americas: Model Zoo\n### Vision-only Checkpoints\n| Model Name     | Model Type | Visual Encoder | Language Decoder | # Training Frames |\n|:----------------|:------------:|:----------------|:------------------|:----------------:|\n| [VideoLLaMA2-7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-Base)  | Base  | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 8 |\n| [VideoLLaMA2-7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B)  | Chat | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 8 |\n| [VideoLLaMA2-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F-Base)  | Base  | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 16 |\n| [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F)  | Chat | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | 
[Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 16 |\n| [VideoLLaMA2-8x7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B-Base)  | Base | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)  | 8 |\n| [VideoLLaMA2-8x7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B)  | Chat | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)  | 8 |\n| [VideoLLaMA2-72B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B-Base)  | Base | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-72B-Instruct)  | 8 |\n| [VideoLLaMA2-72B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B)  | Chat | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-72B-Instruct)  | 8 |\n| [VideoLLaMA2.1-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base) | Base | [siglip-so400m-patch14-384](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-7B-Instruct)  | 16 |\n| [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F)  | Chat | [siglip-so400m-patch14-384](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-7B-Instruct)  | 16 |\n\n\n### Audio-Visual Checkpoints\n| Model Name     | Type | Audio Encoder | Language Decoder |\n|:-------------------|:----------------|:----------------|:------------------|\n| [VideoLLaMA2.1-7B-AV](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV)  | Chat | [Fine-tuned BEATs_iter3+(AS2M)(cpt2)](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!AqeByhGUtINrgcpj8ujXH1YUtxooEg?e=E9Ncea) | [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F)  |\n\n\n\n## [🤗 Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2)\n\nIt is highly recommended to try our [online demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2) first.\n\nTo run a video-based LLM (Large Language Model) web demonstration on your device, you will first need to ensure that you have the necessary model checkpoints prepared, followed by adhering to the steps outlined to successfully launch the demo.\n\n### Single-model Version\n\n* Launch a gradio app directly ([VideoLLaMA2-7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B) is adopted by default):\n```bash\npython videollama2\u002Fserve\u002Fgradio_web_server_adhoc.py\n```\n\n### Multiple-model Version\n\n1. Launch a global controller\n```bash\ncd \u002Fpath\u002Fto\u002FVideoLLaMA2\npython -m videollama2.serve.controller --host 0.0.0.0 --port 10000\n```\n\n2. 
Launch a gradio webserver\n```bash\npython -m videollama2.serve.gradio_web_server --controller http:\u002F\u002Flocalhost:10000 --model-list-mode reload\n```\n\n3. Launch one or multiple model workers\n```bash\n#  export HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com  # If you are unable to access Hugging Face, try to uncomment this line.\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path \u002FPATH\u002FTO\u002FMODEL1\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40001 --worker http:\u002F\u002Flocalhost:40001 --model-path \u002FPATH\u002FTO\u002FMODEL2\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40002 --worker http:\u002F\u002Flocalhost:40002 --model-path \u002FPATH\u002FTO\u002FMODEL3\n...\n```\n\n\n## 🗝️ Training & Evaluation\n\n### Quick Start\n\nTo facilitate further development on top of our codebase, we provide a quick-start guide on how to train a customized [VideoLLaMA2](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2) with [VideoLLaVA](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FVideo-LLaVA) dataset and evaluate the trained model on the mainstream video-llm benchmarks.\n\n1. Training Data Structure:\n```bash\nVideoLLaMA2\n├── datasets\n│   ├── videollava_pt\n|   |   ├── llava_image\u002F # Available at: https:\u002F\u002Fpan.baidu.com\u002Fs\u002F17GYcE69FcJjjUM0e4Gad2w?pwd=9ga3 or https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1QmFj2FcMAoWNCUyiUtdcW0-IOhLbOBcf?usp=drive_link\n|   |   ├── valley\u002F      # Available at: https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1jluOimE7mmihEBfnpwwCew?pwd=jyjz or https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1QmFj2FcMAoWNCUyiUtdcW0-IOhLbOBcf?usp=drive_link\n|   |   └── valley_llavaimage.json # Available at: https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zGRyVSUMoczGq6cjQFmT0prH67bu2wXD\u002Fview, including 703K video-text and 558K image-text pairs\n│   ├── videollava_sft\n|   |   ├── llava_image_tune\u002F  # Available at: https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1l-jT6t_DlN5DTklwArsqGw?pwd=o6ko\n|   |   ├── videochatgpt_tune\u002F # Available at: https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10hJ_U7wVmYTUo75YHc_n8g?pwd=g1hf\n|   |   └── videochatgpt_llavaimage_tune.json # Available at: https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zGRyVSUMoczGq6cjQFmT0prH67bu2wXD\u002Fview, including 100K video-centric, 625K image-centric and 40K text-only conversations\n```\n2. Command:\n```bash\n# VideoLLaMA2-vllava pretraining\nbash scripts\u002Fvllava\u002Fpretrain.sh\n# VideoLLaMA2-vllava finetuning\nbash scripts\u002Fvllava\u002Ffinetune.sh\n```\n3. 
Evaluation Data Structure:\n```bash\nVideoLLaMA2\n├── eval\n│   ├── egoschema # Official website: https:\u002F\u002Fgithub.com\u002Fegoschema\u002FEgoSchema\n|   |   ├── good_clips_git\u002F # Available at: https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SS0VVz8rML1e5gWq7D7VtP1oxE2UtmhQ\n|   |   └── questions.json  # Available at: https:\u002F\u002Fgithub.com\u002Fegoschema\u002FEgoSchema\u002Fblob\u002Fmain\u002Fquestions.json\n│   ├── mvbench # Official website: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FOpenGVLab\u002FMVBench\n|   |   ├── video\u002F\n|   |   |   ├── clever\u002F\n|   |   |   └── ...\n|   |   └── json\u002F\n|   |   |   ├── action_antonym.json\n|   |   |   └── ...\n│   ├── perception_test_mcqa # Official website: https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FOpenGVLab\u002FMVBench\n|   |   ├── videos\u002F # Available at: https:\u002F\u002Fstorage.googleapis.com\u002Fdm-perception-test\u002Fzip_data\u002Ftest_videos.zip\n|   |   └── mc_question_test.json # Download from https:\u002F\u002Fstorage.googleapis.com\u002Fdm-perception-test\u002Fzip_data\u002Fmc_question_test_annotations.zip\n│   ├── videomme # Official website: https:\u002F\u002Fvideo-mme.github.io\u002Fhome_page.html#leaderboard\n|   |   ├── test-00000-of-00001.parquet\n|   |   ├── videos\u002F\n|   |   └── subtitles\u002F\n│   ├── Activitynet_Zero_Shot_QA # Official website: https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\n|   |   ├── all_test\u002F   # Available at: https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002FEatOpE7j68tLm2XAd0u6b8ABGGdVAwLMN6rqlDGM_DwhVA?e=90WIuW\n|   |   ├── test_q.json # Available at: https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\u002Ftree\u002Fmaster\u002Fdataset\n|   |   └── test_a.json # Available at: https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\u002Ftree\u002Fmaster\u002Fdataset\n│   ├── MSVD_Zero_Shot_QA # Official website: https:\u002F\u002Fgithub.com\u002Fxudejing\u002Fvideo-question-answering\n|   |   ├── videos\u002F     \n|   |   ├── test_q.json \n|   |   └── test_a.json\n│   ├── videochatgpt_gen # Official website: https:\u002F\u002Fgithub.com\u002Fmbzuai-oryx\u002FVideo-ChatGPT\u002Ftree\u002Fmain\u002Fquantitative_evaluation\n|   |   ├── Test_Videos\u002F # Available at: https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002FEatOpE7j68tLm2XAd0u6b8ABGGdVAwLMN6rqlDGM_DwhVA?e=90WIuW\n|   |   ├── Test_Human_Annotated_Captions\u002F # Available at: https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002F_layouts\u002F15\u002Fonedrive.aspx?id=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking%2FTest%5FHuman%5FAnnotated%5FCaptions%2Ezip&parent=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking&ga=1\n|   |   ├── generic_qa.json     # These three json files available at: https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002F_layouts\u002F15\u002Fonedrive.aspx?id=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking%2FBenchmarking%5FQA&ga=1\n|   |   ├── temporal_qa.json\n|   
|   └── consistency_qa.json\n```\n4. Command:\n```bash\n# mvbench evaluation\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts\u002Feval\u002Feval_video_qa_mvbench.sh\n# activitynet-qa evaluation (need to set azure openai key\u002Fendpoint\u002Fdeployname)\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts\u002Feval\u002Feval_video_qa_mvbench.sh\n```\n\n### Data Format\n\nIf you want to train a video-llm on your data, you need to follow the procedures below to prepare the video\u002Fimage sft data:\n\n1. Suppose your data structure is like:\n```bash\nVideoLLaMA2\n├── datasets\n│   ├── custom_sft\n│   |   ├── images\n│   |   ├── videos\n|   |   └── custom.json\n```\n2. Then you should re-organize the annotated video\u002Fimage sft data according to the following format:\n```json\n[\n    {\n        \"id\": 0,\n        \"video\": \"images\u002Fxxx.jpg\",\n        \"conversations\": [\n            {\n                \"from\": \"human\",\n                \"value\": \"\u003Cimage>\\nWhat are the colors of the bus in the image?\"\n            },\n            {\n                \"from\": \"gpt\",\n                \"value\": \"The bus in the image is white and red.\"\n            },\n            ...\n        ],\n    }\n    {\n        \"id\": 1,\n        \"video\": \"videos\u002Fxxx.mp4\",\n        \"conversations\": [\n            {\n                \"from\": \"human\",\n                \"value\": \"\u003Cvideo>\\nWhat are the main activities that take place in the video?\"\n            },\n            {\n                \"from\": \"gpt\",\n                \"value\": \"The main activities that take place in the video are the preparation of camera equipment by a man, a group of men riding a helicopter, and a man sailing a boat through the water.\"\n            },\n            ...\n        ],\n    },\n    ...\n]\n```\n3. Modify the `scripts\u002Fcustom\u002Ffinetune.sh`:\n```bash\n...\n--data_path datasets\u002Fcustom_sft\u002Fcustom.json\n--data_folder datasets\u002Fcustom_sft\u002F\n--pretrain_mm_mlp_adapter CONNECTOR_DOWNLOAD_PATH (e.g., DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base)\n...\n```\n\n## 🤖 Inference\n\nVideo\u002FImage Inference:\n```python\nimport sys\nsys.path.append('.\u002F')\nfrom videollama2 import model_init, mm_infer\nfrom videollama2.utils import disable_torch_init\n\n\ndef inference():\n    disable_torch_init()\n\n    # Video Inference\n    modal = 'video'\n    modal_path = 'assets\u002Fcat_and_chicken.mp4' \n    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'\n    # Reply:\n    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.\n\n    # Image Inference\n    modal = 'image'\n    modal_path = 'assets\u002Fsora.png'\n    instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'\n    # Reply:\n    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. 
The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.\n\n    model_path = 'DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F'\n    # Base model inference (only need to replace model_path)\n    # model_path = 'DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base'\n    model, processor, tokenizer = model_init(model_path)\n    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)\n\n    print(output)\n\nif __name__ == \"__main__\":\n    inference()\n```\n\n## 📑 Citation\n\nIf you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:\n```bibtex\n@article{damonlpsg2024videollama2,\n  title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},\n  author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},\n  journal={arXiv preprint arXiv:2406.07476},\n  year={2024},\n  url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476}\n}\n\n@article{damonlpsg2023videollama,\n  title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},\n  author = {Zhang, Hang and Li, Xin and Bing, Lidong},\n  journal = {arXiv preprint arXiv:2306.02858},\n  year = {2023},\n  url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02858}\n}\n```\n\n## 👍 Acknowledgement\nThe codebase of VideoLLaMA 2 is adapted from [**LLaVA 1.5**](https:github.com\u002Fhaotian-liu\u002FLLaVA) and [**FastChat**](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat). We are also grateful for the following projects our VideoLLaMA 2 arise from:\n* [**LLaMA 2**](https:\u002F\u002Fgithub.com\u002Fmeta-llama\u002Fllama), [**Mistral-7B**](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fannouncing-mistral-7b\u002F), [**OpenAI CLIP**](https:\u002F\u002Fopenai.com\u002Findex\u002Fclip\u002F), [**Qwen2**](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FQwen\u002Fqwen2-6659360b33528ced941e557f), [**SigLIP**](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fgoogle\u002Fsiglip-659d5e62f0ae1a57ae0e83ba), [**Honeybee**](https:\u002F\u002Fgithub.com\u002Fkakaobrain\u002Fhoneybee).\n* [**Video-ChatGPT**](https:\u002F\u002Fgithub.com\u002Fmbzuai-oryx\u002FVideo-ChatGPT), [**Video-LLaVA**](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FVideo-LLaVA). 
\n* [**WebVid**](https:\u002F\u002Fgithub.com\u002Fm-bain\u002Fwebvid), [**Panda-70M**](https:\u002F\u002Fgithub.com\u002Fsnap-research\u002FPanda-70M), [**LanguageBind**](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FLanguageBind), [**InternVid**](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FInternVideo\u002Ftree\u002Fmain\u002FData\u002FInternVid).\n* [**VideoChat2**](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FAsk-Anything\u002Ftree\u002Fmain\u002Fvideo_chat2), [**Valley**](https:\u002F\u002Fgithub.com\u002FRupertLuo\u002FValley), [**VTimeLLM**](https:\u002F\u002Fgithub.com\u002Fhuangb23\u002FVTimeLLM), [**ShareGPT4V**](https:\u002F\u002Fsharegpt4v.github.io\u002F).\n* [**Magpie**](https:\u002F\u002Fgithub.com\u002Fmagpie-align\u002Fmagpie), [**ALLaVA**](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA), [**AVInstruct**](https:\u002F\u002Fgithub.com\u002Frikeilong\u002FBay-CAT\u002Ftree\u002Fmain\u002FAVinstruct).\n\n\n## 🔒 License\n\nThis project is released under the Apache 2.0 license as found in the LICENSE file.\nThe service is a research preview intended for **non-commercial use ONLY**, subject to the model Licenses of LLaMA and Mistral, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please get in touch with us if you find any potential violations.\n","\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_ae98d62b2f12.png\" width=\"150\" style=\"margin-bottom: 0.2;\"\u002F>\n\u003Cp>\n\n\u003Ch3 align=\"center\">\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476\" style=\"color:#9C276A\">\nVideoLLaMA 2：推进视频大模型中的时空建模与音频理解\u003C\u002Fa>\u003C\u002Fh3>\n\u003Ch5 align=\"center\"> 如果我们的项目对您有所帮助，请在 GitHub 上给我们点个赞 ⭐，以支持我们。 🙏🙏 \u003C\u002Fh2>\n\n\u003Ch5 align=\"center\">\n\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-AV--Demo-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2-AV)\n[![hf_space](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Demo-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2)\n[![hf_checkpoint](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Checkpoints-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FDAMO-NLP-SG\u002Fvideollama-2-6669b6b6f0493188305c87ed)\n[![hf_data](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-MSVC-9C276A.svg)](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FDAMO-NLP-SG\u002FMulti-Source-Video-Captioning) \u003Cbr>\n[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-yellow)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fblob\u002Fmain\u002FLICENSE) \n[![Hits](https:\u002F\u002Fhits.seeyoufarm.com\u002Fapi\u002Fcount\u002Fincr\u002Fbadge.svg?url=https%3A%2F%2Fgithub.com%2FDAMO-NLP-SG%2FVideoLLaMA2&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https:\u002F\u002Fhits.seeyoufarm.com)\n[![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FDAMO-NLP-SG\u002FVideoLLaMA2?color=critical&label=Issues)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues?q=is%3Aopen+is%3Aissue)\n[![GitHub closed 
issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002FDAMO-NLP-SG\u002FVideoLLaMA2?color=success&label=Issues)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues?q=is%3Aissue+is%3Aclosed)  \u003Cbr>\n[![hf_paper](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Paper%20In%20HF-red.svg)](https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2406.07476)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2406.07476-AD1C18.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476) \u003Cbr>\n\n\u003C\u002Fh5>\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-egoschema-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-egoschema-1?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fvideo-question-answering-on-perception-test)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-question-answering-on-perception-test?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fvideo-question-answering-on-mvbench)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fvideo-question-answering-on-mvbench?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-video-mme-1)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-video-mme-1?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fvideollama-2-advancing-spatial-temporal\u002Fzero-shot-video-question-answer-on-video-mme)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fzero-shot-video-question-answer-on-video-mme?p=videollama-2-advancing-spatial-temporal) \u003Cbr>\n\n\u003Cdetails open>\u003Csummary>💡 您可能对我们团队的其他多模态大模型项目感兴趣 ✨。\u003C\u002Fsummary>\u003Cp>\n\u003C!--  may -->\n\n> [**VideoLLaMA 3：面向图像与视频理解的前沿多模态基础模型**](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3) \u003Cbr>\n> Boqiang Zhang\u003Csup>* \u003C\u002Fsup>, Kehan Li\u003Csup>* \u003C\u002Fsup>, Zesen Cheng\u003Csup>* \u003C\u002Fsup>, Zhiqiang Hu\u003Csup>* \u003C\u002Fsup>, Yuqian Yuan\u003Csup>* \u003C\u002Fsup>, Guanzheng Chen\u003Csup>* \u003C\u002Fsup>, Sicong Leng\u003Csup>* \u003C\u002Fsup>, Yuming Jiang\u003Csup>* \u003C\u002Fsup>, Hang Zhang\u003Csup>* \u003C\u002Fsup>, Xin Li\u003Csup>* \u003C\u002Fsup>, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVideoLLaMA3.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3) 
[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2501.13106-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.13106) \u003Cbr>\n\n> [**Video-LLaMA：用于视频理解的指令微调音视频语言模型**](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA) \u003Cbr>\n> Hang Zhang, Xin Li, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVideo-LLaMA.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideo-LLaMA) [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2306.02858-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02858) \u003Cbr>\n\n> [**VCD：通过视觉对比解码缓解大型视觉-语言模型中的对象幻觉问题**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16922) \u003Cbr>\n> Sicong Leng\u003Csup>* \u003C\u002Fsup>, Hang Zhang\u003Csup>* \u003C\u002Fsup>, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVCD)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FVCD.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVCD)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2311.16922-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.16922) \u003Cbr>\n\n> [**多模态的诅咒：评估大型多模态模型在语言、视觉和音频方面的幻觉现象**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12787) \u003Cbr>\n> Sicong Leng, Yun Xing, Zesen Cheng, Yang Zhou, Hang Zhang, Xin Li, Deli Zhao, Shijian Lu, Chunyan Miao, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FCMM)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FCMM.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FCMM)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2410.12787-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12787) \u003Cbr>\n\n> [**突破内存限制：近无限批量规模的对比损失**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17243) \u003Cbr>\n> Zesen Cheng*, Hang Zhang*, Kehan Li*, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing \u003Cbr>\n[![github](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Github-black?logo=github)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FInf-CLIP)  [![github](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FDAMO-NLP-SG\u002FInf-CLIP.svg?style=social)](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FInf-CLIP)  [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FArxiv-2410.17243-b31b1b.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.17243) \u003Cbr>\n\n\u003C\u002Fp>\u003C\u002Fdetails>\n\n\u003Cdiv align=\"center\">\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fassets\u002F18526640\u002Fe0e7951c-f392-42ed-afad-b2c7984d3e38\" width=\"800\">\u003C\u002Fdiv>\n\n\n\n\n## 📰 新闻\n* **[2025.01.21]** 🚀🚀 我们很高兴地正式发布 [VideoLLaMA3](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA3)，它在图像和视频基准测试中性能全面提升，并附带多种易于操作的推理教程。立即试用吧！ \n* **[2024.10.22]** 发布 [VideoLLaMA2.1-7B-AV](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV) 
的检查点。音频视觉分支的代码可在此查看：https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Ftree\u002Faudio_visual。\n* **[2024.10.15]** 发布 [VideoLLaMA2.1-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base) 和 [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F) 的检查点。\n* **[2024.08.14]** 发布 [VideoLLaMA2-72B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B-Base) 和 [VideoLLaMA2-72B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B) 的检查点。\n* **[2024.07.30]** 发布 [VideoLLaMA2-8x7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B-Base) 和 [VideoLLaMA2-8x7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B) 的检查点。\n* **[2024.06.25]**  🔥🔥 截至6月25日，我们的 [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F) 在 [MLVU排行榜](https:\u002F\u002Fgithub.com\u002FJUNJIE99\u002FMLVU?tab=readme-ov-file#trophy-mini-leaderboard) 上是约70亿参数规模的视频大模型中的**第一名**。\n* **[2024.06.18]**  🔥🔥 截至6月18日，我们的 [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F) 在 [VideoMME排行榜](https:\u002F\u002Fvideo-mme.github.io\u002Fhome_page.html#leaderboard) 上是约70亿参数规模的视频大模型中的**第一名**。\n* **[2024.06.17]**  👋👋 更新技术报告，加入最新结果及遗漏的参考文献。如果您有与 VideoLLaMA 2 密切相关但未在论文中提及的工作，请随时告知我们。\n* **[2024.06.14]**  🔥🔥 [在线演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2) 已上线。\n* **[2024.06.03]**  发布 VideoLLaMA 2 的训练、评估和推理代码。\n\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_4e76077a2896.png\" width=\"800\" \u002F>\n\n## 🛠️ 环境要求与安装\n基础依赖：\n* Python >= 3.8\n* Pytorch >= 2.2.0\n* CUDA 版本 >= 11.8\n* transformers == 4.40.0（用于复现论文结果）\n* tokenizers == 0.19.1\n\n**[在线模式]** 安装所需包（更适合开发）：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\ncd VideoLLaMA2\npip install -r requirements.txt\npip install flash-attn==2.5.8 --no-build-isolation\n```\n\n**[离线模式]** 将 VideoLLaMA2 作为 Python 包安装（更适合直接使用）：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\ncd VideoLLaMA2\npip install --upgrade pip  # 启用 PEP 660 支持\npip install -e .\npip install flash-attn==2.5.8 --no-build-isolation\n```\n\n## 🚀 主要结果\n\n### 多选题视频问答与视频字幕生成\n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_704a59f871b2.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n### 开放式视频问答\n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_f03ef48b6447.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n### 音频问答 \n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_350bb65716d5.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n### 音频-视觉问答 \n\u003Cp>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_readme_d977a55b772d.png\" width=\"800\" \"\u002F>\u003C\u002Fp>\n\n\n## :earth_americas: 模型库\n### 仅视觉检查点\n| 模型名称     | 模型类型 | 视觉编码器 | 语言解码器 | 训练帧数 |\n|:----------------|:------------:|:----------------|:------------------|:----------------:|\n| [VideoLLaMA2-7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-Base)  | 基础版  | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | 
[Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 8 |\n| [VideoLLaMA2-7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B)  | 对话版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 8 |\n| [VideoLLaMA2-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F-Base)  | 基础版  | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 16 |\n| [VideoLLaMA2-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B-16F)  | 对话版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMistral-7B-Instruct-v0.2)  | 16 |\n| [VideoLLaMA2-8x7B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B-Base)  | 基础版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)  | 8 |\n| [VideoLLaMA2-8x7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-8x7B)  | 对话版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)  | 8 |\n| [VideoLLaMA2-72B-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B-Base)  | 基础版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-72B-Instruct)  | 8 |\n| [VideoLLaMA2-72B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-72B)  | 对话版 | [clip-vit-large-patch14-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-72B-Instruct)  | 8 |\n| [VideoLLaMA2.1-7B-16F-Base](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base) | 基础版 | [siglip-so400m-patch14-384](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-7B-Instruct)  | 16 |\n| [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F)  | 对话版 | [siglip-so400m-patch14-384](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen2-7B-Instruct)  | 16 |\n\n\n### 音频-视觉检查点\n| 模型名称     | 类型 | 音频编码器 | 语言解码器 |\n|:-------------------|:----------------|:----------------|:------------------|\n| [VideoLLaMA2.1-7B-AV](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV)  | 对话版 | [微调后的 BEATs_iter3+(AS2M)(cpt2)](https:\u002F\u002F1drv.ms\u002Fu\u002Fs!AqeByhGUtINrgcpj8ujXH1YUtxooEg?e=E9Ncea) | [VideoLLaMA2.1-7B-16F](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F)  |\n\n## [🤗 
演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2)\n\n强烈建议您先尝试我们的[在线演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Flixin4ever\u002FVideoLLaMA2)。\n\n要在您的设备上运行基于视频的大型语言模型（LLM）Web 演示，您首先需要确保已准备好必要的模型检查点，然后按照以下步骤操作，即可成功启动演示。\n\n### 单模型版本\n\n* 直接启动 Gradio 应用程序（默认采用 [VideoLLaMA2-7B](https:\u002F\u002Fhuggingface.co\u002FDAMO-NLP-SG\u002FVideoLLaMA2-7B)）：\n```bash\npython videollama2\u002Fserve\u002Fgradio_web_server_adhoc.py\n```\n\n### 多模型版本\n\n1. 启动全局控制器\n```bash\ncd \u002Fpath\u002Fto\u002FVideoLLaMA2\npython -m videollama2.serve.controller --host 0.0.0.0 --port 10000\n```\n\n2. 启动 Gradio Web 服务器\n```bash\npython -m videollama2.serve.gradio_web_server --controller http:\u002F\u002Flocalhost:10000 --model-list-mode reload\n```\n\n3. 启动一个或多个模型工作节点\n```bash\n#  export HF_ENDPOINT=https:\u002F\u002Fhf-mirror.com  # 如果无法访问 Hugging Face，请尝试取消注释此行。\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path \u002FPATH\u002FTO\u002FMODEL1\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40001 --worker http:\u002F\u002Flocalhost:40001 --model-path \u002FPATH\u002FTO\u002FMODEL2\npython -m videollama2.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40002 --worker http:\u002F\u002Flocalhost:40002 --model-path \u002FPATH\u002FTO\u002FMODEL3\n...\n```\n\n\n## 🗝️ 训练与评估\n\n### 快速入门\n\n为了便于在我们的代码库基础上进行进一步开发，我们提供了一份快速入门指南，介绍如何使用 [VideoLLaVA](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FVideo-LLaVA) 数据集训练自定义的 [VideoLLaMA2](https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2)，并在主流视频 LLM 基准测试上评估训练好的模型。\n\n1. 训练数据结构：\n```bash\nVideoLLaMA2\n├── datasets\n│   ├── videollava_pt\n|   |   ├── llava_image\u002F # 可通过以下链接获取：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F17GYcE69FcJjjUM0e4Gad2w?pwd=9ga3 或 https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1QmFj2FcMAoWNCUyiUtdcW0-IOhLbOBcf?usp=drive_link\n|   |   ├── valley\u002F      # 可通过以下链接获取：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1jluOimE7mmihEBfnpwwCew?pwd=jyjz 或 https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1QmFj2FcMAoWNCUyiUtdcW0-IOhLbOBcf?usp=drive_link\n|   |   └── valley_llavaimage.json # 可通过以下链接获取：https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zGRyVSUMoczGq6cjQFmT0prH67bu2wXD\u002Fview，包含 703K 个视频-文本对和 558K 个图像-文本对\n│   ├── videollava_sft\n|   |   ├── llava_image_tune\u002F  # 可通过以下链接获取：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1l-jT6t_DlN5DTklwArsqGw?pwd=o6ko\n|   |   ├── videochatgpt_tune\u002F # 可通过以下链接获取：https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10hJ_U7wVmYTUo75YHc_n8g?pwd=g1hf\n|   |   └── videochatgpt_llavaimage_tune.json # 可通过以下链接获取：https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1zGRyVSUMoczGq6cjQFmT0prH67bu2wXD\u002Fview，包含 100K 个以视频为中心、625K 个以图像为中心以及 40K 个纯文本对话\n```\n2. 命令：\n```bash\n# VideoLLaMA2-vllava 预训练\nbash scripts\u002Fvllava\u002Fpretrain.sh\n# VideoLLaMA2-vllava 微调\nbash scripts\u002Fvllava\u002Ffinetune.sh\n```\n3. 
评估数据结构：\n```bash\nVideoLLaMA2\n├── eval\n│   ├── egoschema # 官方网站：https:\u002F\u002Fgithub.com\u002Fegoschema\u002FEgoSchema\n|   |   ├── good_clips_git\u002F # 可通过以下链接获取：https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1SS0VVz8rML1e5gWq7D7VtP1oxE2UtmhQ\n|   |   └── questions.json  # 可通过以下链接获取：https:\u002F\u002Fgithub.com\u002Fegoschema\u002FEgoSchema\u002Fblob\u002Fmain\u002Fquestions.json\n│   ├── mvbench # 官方网站：https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FOpenGVLab\u002FMVBench\n|   |   ├── video\u002F\n|   |   |   ├── clever\u002F\n|   |   |   └── ...\n|   |   └── json\u002F\n|   |   |   ├── action_antonym.json\n|   |   |   └── ...\n│   ├── perception_test_mcqa # 官方网站：https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FOpenGVLab\u002FMVBench\n|   |   ├── videos\u002F # 可通过以下链接获取：https:\u002F\u002Fstorage.googleapis.com\u002Fdm-perception-test\u002Fzip_data\u002Ftest_videos.zip\n|   |   └── mc_question_test.json # 可从以下链接下载：https:\u002F\u002Fstorage.googleapis.com\u002Fdm-perception-test\u002Fzip_data\u002Fmc_question_test_annotations.zip\n│   ├── videomme # 官方网站：https:\u002F\u002Fvideo-mme.github.io\u002Fhome_page.html#leaderboard\n|   |   ├── test-00000-of-00001.parquet\n|   |   ├── videos\u002F\n|   |   └── subtitles\u002F\n│   ├── Activitynet_Zero_Shot_QA # 官方网站：https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\n|   |   ├── all_test\u002F   # 可通过以下链接获取：https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002FEatOpE7j68tLm2XAd0u6b8ABGGdVAwLMN6rqlDGM_DwhVA?e=90WIuW\n|   |   ├── test_q.json # 可通过以下链接获取：https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\u002Ftree\u002Fmaster\u002Fdataset\n|   |   └── test_a.json # 可通过以下链接获取：https:\u002F\u002Fgithub.com\u002FMILVLG\u002Factivitynet-qa\u002Ftree\u002Fmaster\u002Fdataset\n│   ├── MSVD_Zero_Shot_QA # 官方网站：https:\u002F\u002Fgithub.com\u002Fxudejing\u002Fvideo-question-answering\n|   |   ├── videos\u002F     \n|   |   ├── test_q.json \n|   |   └── test_a.json\n│   ├── videochatgpt_gen # 官方网站：https:\u002F\u002Fgithub.com\u002Fmbzuai-oryx\u002FVideo-ChatGPT\u002Ftree\u002Fmain\u002Fquantitative_evaluation\n|   |   ├── Test_Videos\u002F # 可通过以下链接获取：https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002FEatOpE7j68tLm2XAd0u6b8ABGGdVAwLMN6rqlDGM_DwhVA?e=90WIuW\n|   |   ├── Test_Human_Annotated_Captions\u002F # 可通过以下链接获取：https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002F_layouts\u002F15\u002Fonedrive.aspx?id=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking%2FTest%5FHuman%5FAnnotated%5FCaptions%2Ezip&parent=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking&ga=1\n|   |   ├── generic_qa.json     # 这三份 JSON 文件可通过以下链接获取：https:\u002F\u002Fmbzuaiac-my.sharepoint.com\u002Fpersonal\u002Fhanoona_bangalath_mbzuai_ac_ae\u002F_layouts\u002F15\u002Fonedrive.aspx?id=%2Fpersonal%2Fhanoona%5Fbangalath%5Fmbzuai%5Fac%5Fae%2FDocuments%2FVideo%2DChatGPT%2FData%5FCode%5FModel%5FRelease%2FQuantitative%5FEvaluation%2Fbenchamarking%2FBenchmarking%5FQA&ga=1\n|   |   ├── temporal_qa.json\n|   |   └── consistency_qa.json\n```\n4. 
命令：\n```bash\n# MVBench 评估\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts\u002Feval\u002Feval_video_qa_mvbench.sh\n# Activitynet-qa 评估（需设置 Azure OpenAI 的密钥\u002F端点\u002F部署名称）\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts\u002Feval\u002Feval_video_qa_mvbench.sh\n```\n\n### 数据格式\n\n如果您想使用自己的数据训练视频-LLM，需要按照以下步骤准备视频\u002F图像的SFT数据：\n\n1. 假设您的数据结构如下：\n```bash\nVideoLLaMA2\n├── datasets\n│   ├── custom_sft\n│   |   ├── images\n│   |   ├── videos\n|   |   └── custom.json\n```\n2. 然后您需要按照以下格式重新组织标注好的视频\u002F图像SFT数据：\n```json\n[\n    {\n        \"id\": 0,\n        \"video\": \"images\u002Fxxx.jpg\",\n        \"conversations\": [\n            {\n                \"from\": \"human\",\n                \"value\": \"\u003Cimage>\\n图片中的公交车是什么颜色？\"\n            },\n            {\n                \"from\": \"gpt\",\n                \"value\": \"图片中的公交车是白色和红色的。\"\n            },\n            ...\n        ],\n    }\n    {\n        \"id\": 1,\n        \"video\": \"videos\u002Fxxx.mp4\",\n        \"conversations\": [\n            {\n                \"from\": \"human\",\n                \"value\": \"\u003Cvideo>\\n视频中主要发生了哪些活动？\"\n            },\n            {\n                \"from\": \"gpt\",\n                \"value\": \"视频中主要发生的活动包括一名男子在准备摄像设备、一群男子乘坐直升机，以及一名男子划船穿越水面。\"\n            },\n            ...\n        ],\n    },\n    ...\n]\n```\n3. 修改 `scripts\u002Fcustom\u002Ffinetune.sh`：\n```bash\n...\n--data_path datasets\u002Fcustom_sft\u002Fcustom.json\n--data_folder datasets\u002Fcustom_sft\u002F\n--pretrain_mm_mlp_adapter CONNECTOR_DOWNLOAD_PATH (例如：DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base)\n...\n```\n\n## 🤖 推理\n\n视频\u002F图像推理：\n```python\nimport sys\nsys.path.append('.\u002F')\nfrom videollama2 import model_init, mm_infer\nfrom videollama2.utils import disable_torch_init\n\n\ndef inference():\n    disable_torch_init()\n\n    # 视频推理\n    modal = 'video'\n    modal_path = 'assets\u002Fcat_and_chicken.mp4' \n    instruct = '视频里有哪些动物？它们在做什么？这段视频给人什么感觉？'\n    # 回答：\n    # 视频中有一只小猫和一只小鸡在一起玩耍。小猫躺在地板上，而小鸡则蹦蹦跳跳地四处走动。两只动物之间互动得很有趣，整段视频给人一种可爱又温馨的感觉。\n\n    # 图像推理\n    modal = 'image'\n    modal_path = 'assets\u002Fsora.png'\n    instruct = '图中的女士穿了什么？她在做什么？这张图片给人什么感觉？'\n    # 回答：\n    # 图中女士身穿黑色外套和太阳镜，正走在被雨水浸湿的城市街道上。画面充满活力与生机，明亮的城市灯光映照在湿漉漉的路面上，营造出一种极具视觉吸引力的氛围。女士的存在为这幅场景增添了时尚与自信的气息，她从容地穿梭于繁忙的都市环境中。\n\n    model_path = 'DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F'\n    # 基础模型推理（只需替换model_path）\n    # model_path = 'DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-16F-Base'\n    model, processor, tokenizer = model_init(model_path)\n    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)\n\n    print(output)\n\nif __name__ == \"__main__\":\n    inference()\n```\n\n## 📑 引用\n\n如果您在研究和应用中使用了VideoLLaMA，请使用以下BibTeX格式引用：\n```bibtex\n@article{damonlpsg2024videollama2,\n  title={VideoLLaMA 2: 在视频-LLM中推进时空建模和音频理解},\n  author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},\n  journal={arXiv预印本 arXiv:2406.07476},\n  year={2024},\n  url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.07476}\n}\n\n@article{damonlpsg2023videollama,\n  title = {Video-LLaMA：用于视频理解的指令微调视听语言模型},\n  author = {Zhang, Hang and Li, Xin and Bing, Lidong},\n  journal = {arXiv预印本 arXiv:2306.02858},\n  year = {2023},\n  url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.02858}\n}\n```\n\n## 👍 致谢\nVideoLLaMA 2的代码库改编自[**LLaVA 
1.5**](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA)和[**FastChat**](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)。我们还要感谢以下项目，正是它们促成了VideoLLaMA 2的诞生：\n* [**LLaMA 2**](https:\u002F\u002Fgithub.com\u002Fmeta-llama\u002Fllama)、[**Mistral-7B**](https:\u002F\u002Fmistral.ai\u002Fnews\u002Fannouncing-mistral-7b\u002F)、[**OpenAI CLIP**](https:\u002F\u002Fopenai.com\u002Findex\u002Fclip\u002F)、[**Qwen2**](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FQwen\u002Fqwen2-6659360b33528ced941e557f)、[**SigLIP**](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002Fgoogle\u002Fsiglip-659d5e62f0ae1a57ae0e83ba)、[**Honeybee**](https:\u002F\u002Fgithub.com\u002Fkakaobrain\u002Fhoneybee)。\n* [**Video-ChatGPT**](https:\u002F\u002Fgithub.com\u002Fmbzuai-oryx\u002FVideo-ChatGPT)、[**Video-LLaVA**](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FVideo-LLaVA)。\n* [**WebVid**](https:\u002F\u002Fgithub.com\u002Fm-bain\u002Fwebvid)、[**Panda-70M**](https:\u002F\u002Fgithub.com\u002Fsnap-research\u002FPanda-70M)、[**LanguageBind**](https:\u002F\u002Fgithub.com\u002FPKU-YuanGroup\u002FLanguageBind)、[**InternVid**](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FInternVideo\u002Ftree\u002Fmain\u002FData\u002FInternVid)。\n* [**VideoChat2**](https:\u002F\u002Fgithub.com\u002FOpenGVLab\u002FAsk-Anything\u002Ftree\u002Fmain\u002Fvideo_chat2)、[**Valley**](https:\u002F\u002Fgithub.com\u002FRupertLuo\u002FValley)、[**VTimeLLM**](https:\u002F\u002Fgithub.com\u002Fhuangb23\u002FVTimeLLM)、[**ShareGPT4V**](https:\u002F\u002Fsharegpt4v.github.io\u002F)。\n* [**Magpie**](https:\u002F\u002Fgithub.com\u002Fmagpie-align\u002Fmagpie)、[**ALLaVA**](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA)、[**AVInstruct**](https:\u002F\u002Fgithub.com\u002Frikeilong\u002FBay-CAT\u002Ftree\u002Fmain\u002FAVinstruct)。\n\n## 🔒 许可证\n\n本项目采用Apache 2.0许可证发布，具体见LICENSE文件。该服务为研究预览版，仅限**非商业用途**，同时需遵守LLaMA和Mistral的模型许可、OpenAI生成数据的使用条款以及ShareGPT的隐私政策。如发现任何潜在违规行为，请及时与我们联系。","# VideoLLaMA2 快速上手指南\n\nVideoLLaMA2 是一款先进的视频-语言大模型，专注于提升视频的时空建模能力和音频理解能力。本指南将帮助开发者快速完成环境配置并运行模型。\n\n## 1. 环境准备\n\n在开始之前，请确保您的系统满足以下硬件和软件要求：\n\n*   **操作系统**: Linux (推荐)\n*   **Python**: >= 3.8\n*   **PyTorch**: >= 2.2.0\n*   **CUDA**: >= 11.8\n*   **显存建议**: 运行 7B 模型建议至少 16GB GPU 显存；运行更大模型（如 72B 或 MoE 版本）需要多卡或更高显存。\n\n**核心依赖版本说明：**\n为了复现论文结果，官方推荐以下特定版本：\n*   `transformers == 4.40.0`\n*   `tokenizers == 0.19.1`\n\n> **国内加速提示**：建议使用国内镜像源安装依赖，以提升下载速度。\n> 例如使用清华源：`pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 2. 安装步骤\n\n您可以选择**在线模式**（适合开发和修改代码）或**离线模式**（适合作为包直接调用）。以下以推荐的**在线模式**为例：\n\n### 第一步：克隆项目\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\ncd VideoLLaMA2\n```\n\n### 第二步：安装基础依赖\n```bash\n# 推荐使用国内镜像加速\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 第三步：安装 Flash Attention\nVideoLLaMA2 依赖 `flash-attn` 进行高效推理，需单独安装：\n```bash\npip install flash-attn==2.5.8 --no-build-isolation\n```\n*注意：如果编译失败，请确保已正确安装 CUDA Toolkit 且版本匹配。*\n\n---\n*若需作为 Python 包安装（离线模式），可执行：*\n```bash\npip install --upgrade pip\npip install -e .\npip install flash-attn==2.5.8 --no-build-isolation\n```\n\n
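### 安装自检（可选）\n\n安装完成后，可以用下面的小脚本快速确认 PyTorch、CUDA 与 flash-attn 是否就绪（仅为示意，假设依赖均按上文默认包名安装）：\n\n```python\n# 环境自检（示意）：打印关键依赖的版本信息\nimport torch\n\nprint('PyTorch 版本:', torch.__version__)        # 预期 >= 2.2.0\nprint('CUDA 是否可用:', torch.cuda.is_available())\nprint('CUDA 版本:', torch.version.cuda)           # 预期 >= 11.8\n\ntry:\n    import flash_attn\n    print('flash-attn 版本:', flash_attn.__version__)  # 预期 2.5.8\nexcept ImportError:\n    print('flash-attn 未安装或编译失败，请检查 CUDA Toolkit 与 PyTorch 版本是否匹配')\n```\n\n若以上输出与「环境准备」中的要求一致，即可进入下一节加载模型。\n\n## 3. 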
基本使用\n\n安装完成后，您可以加载预训练模型进行推理。以下是基于 Hugging Face `transformers` 库的最简使用示例。\n\n### 前置准备：下载模型\n请从 [Hugging Face Model Zoo](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FDAMO-NLP-SG\u002Fvideollama-2-6669b6b6f0493188305c87ed) 下载您需要的模型权重（例如 `VideoLLaMA2-7B`）。\n*国内用户可使用镜像站下载或使用 `hf-mirror` 工具。*\n\n### 推理代码示例\n\n```python\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nfrom videollama2.conversation import get_conv_template\nfrom videollama2.mm_utils import process_images, tokenizer_image_token\nfrom videollama2.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN\n\n# 1. 配置模型路径\nmodel_path = \"DAMO-NLP-SG\u002FVideoLLaMA2-7B\"  # 替换为您本地的模型路径\n\n# 2. 加载模型和分词器\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_path,\n    torch_dtype=torch.bfloat16,  # 建议使用 bfloat16 以节省显存\n    device_map=\"auto\",\n    trust_remote_code=True\n)\nmodel.eval()\n\n# 3. 准备输入 (视频帧列表 和 文本问题)\n# 注意：实际使用时需使用视频处理工具提取帧并转换为 Tensor\n# 此处 video_tensors 假设为已经处理好的视频帧张量列表 [frame1, frame2, ...]\n# prompt 为用户提出的问题\nvideo_tensors = [...] \nprompt = \"请描述这个视频中发生了什么？\"\n\n# 构建对话模板\nconv = get_conv_template(\"videollama2\")\nconv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + \"\\n\" + prompt)\nconv.append_message(conv.roles[1], None)\nquery = conv.get_prompt()\n\n# 4. 推理生成\ninput_ids = tokenizer_image_token(query, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()\n\nwith torch.inference_mode():\n    output_ids = model.generate(\n        input_ids,\n        images=video_tensors,\n        do_sample=True,\n        temperature=0.2,\n        max_new_tokens=1024,\n        use_cache=True\n    )\n\n# 5. 解码输出\noutput = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]\nprint(output)\n```\n\n> **提示**：完整的视频预处理（帧提取、缩放、归一化）逻辑请参考仓库中的 `demo` 或 `eval` 脚本，以确保输入格式与模型训练时一致。","某电商平台的客服团队每天需处理大量用户上传的“产品使用反馈视频”，这些视频包含用户操作演示和口头抱怨，人工审核效率极低且容易遗漏关键细节。\n\n### 没有 VideoLLaMA2 时\n- 审核人员必须全程观看冗长视频，无法快速定位用户具体在哪个时间点指出了产品缺陷，耗时费力。\n- 视频中的背景噪音或用户语速过快导致关键语音投诉（如“按钮没反应”）被忽略，仅靠画面无法理解完整意图。\n- 难以区分是用户操作失误还是产品本身的空间结构问题（如接口位置设计不合理），导致误判责任归属。\n- 面对海量视频数据，无法批量提取结构化报告，只能依靠人工逐条记录，严重拖慢产品迭代反馈周期。\n\n### 使用 VideoLLaMA2 后\n- VideoLLaMA2 能精准分析时空上下文，自动截取用户指出问题的关键片段，并生成带时间戳的文字摘要，审核效率提升数倍。\n- 凭借强大的音频理解能力，它能准确转录并关联用户的语音抱怨与对应画面动作，确保“听得到”也“看得懂”每一条反馈。\n- 模型可深入推理空间关系，智能判断是用户操作不当还是产品设计存在空间布局缺陷，提供客观的责任分析建议。\n- 支持批量处理视频流，直接输出包含问题分类、严重程度及证据片段的结构化报表，无缝对接产品研发管理系统。\n\nVideoLLaMA2 通过将复杂的视听信息转化为可执行的洞察，彻底重构了视频反馈的处理流程，让产品优化决策从“凭经验猜测”转向“数据驱动”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FDAMO-NLP-SG_VideoLLaMA2_1e35b369.png","DAMO-NLP-SG","Language Technology Lab at Alibaba DAMO Academy","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FDAMO-NLP-SG_7176372c.png","",null,"https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",90.9,{"name":88,"color":89,"percentage":90},"Shell","#89e051",9.1,1291,87,"2026-04-13T02:17:57","Apache-2.0","未说明","必需 NVIDIA GPU，CUDA 版本 >= 11.8，需支持 flash-attn (通常建议显存 16GB+ 以运行 7B 模型，72B 或 MoE 模型需要更高显存)",{"notes":98,"python":99,"dependencies":100},"安装 flash-attn 时需添加 '--no-build-isolation' 参数。项目提供在线开发模式（requirements.txt）和离线包模式（pip install -e .）两种安装方式。模型包含多种尺寸（7B, 8x7B, 72B），请根据显卡显存选择合适的检查点。",">= 
3.8",[101,102,103,104],"torch>=2.2.0","transformers==4.40.0","tokenizers==0.19.1","flash-attn==2.5.8",[15,35,55,54],"2026-03-27T02:49:30.150509","2026-04-14T04:34:58.587518",[109,114,119,123,128,133],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},32525,"如何使用微调后的 LoRA 权重进行推理？","最新版本的代码支持直接加载 LoRA 模型进行推理，无需手动合并权重。只需将 `model_init` 函数中的 `model_path` 指向你的 LoRA 微调输出目录即可。示例代码如下：\n\n```python\nfrom videollama2 import model_init, mm_infer\nfrom videollama2.utils import disable_torch_init\n\ndisable_torch_init()\n\nmodal = 'video' # 或 'image'\nmodal_path = 'path\u002Fto\u002Fyour\u002Fvideo.mp4'\ninstruct = '你的问题指令'\nmodel_path = 'path\u002Fto\u002Fyour\u002Flora\u002Ffinetune\u002Foutput\u002Fdir' # 指向包含 adapter_config.json 等文件的目录\n\nmodel, processor, tokenizer = model_init(model_path)\noutput = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)\nprint(output)\n```\n确保使用的是支持该功能的最新代码提交版本。","https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues\u002F83",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},32526,"运行音视频问答（AVQA）模型时出现关于 vision tower 参数未找到的警告，这正常吗？","这是正常现象，可以忽略该警告。因为官方并未对 vision tower 进行微调，其参数是单独从 `siglip-so400m-patch14-384` 预训练模型中加载的，而不是包含在 `DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV` 的检查点文件中。因此加载时会报告相关警告，但不影响模型的正常运行和推理结果。","https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues\u002F119",{"id":120,"question_zh":121,"answer_zh":122,"source_url":118},32527,"如何运行音视频联合推理（Audio-Visual Inference）？","可以使用以下脚本进行音视频联合推理。关键在于设置 `va=True` 来处理同时包含音频和视频的文件，并根据需要选择模态。\n\n```python\nimport sys\nsys.path.append('.\u002F')\nfrom videollama2 import model_init, mm_infer\nfrom videollama2.utils import disable_torch_init\n\ndisable_torch_init()\n\n# 初始化模型，使用支持音视频的模型路径\nmodel_path = 'DAMO-NLP-SG\u002FVideoLLaMA2.1-7B-AV'\nmodel, processor, tokenizer = model_init(model_path)\n\n# 准备音视频数据\naudio_video_path = \"assets\u002Fsample_av.mp4\"\n# va=True 表示同时处理视频和音频\naudio_video_tensor = processor['video'](audio_video_path, va=True)\n\nquestion = \"请结合音频信息描述这个视频。\"\n\noutput = mm_infer(\n    audio_video_tensor,\n    question,\n    model=model,\n    tokenizer=tokenizer,\n    modal='video', # 模态通常设为 video，内部会处理音频\n    do_sample=False,\n)\nprint(output)\n```\n如果遇到环境报错，建议重建虚拟环境并确保所有依赖（如 decord, torchaudio 等）已正确安装。",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},32528,"评估 AVQA 任务时，复现的准确率远低于论文报告值（例如只有 65% 而非 80%），可能的原因是什么？","准确率偏低通常与数据集预处理或测试样本数量不一致有关。官方 Music-AVQA 数据集的测试集包含 9185 个样本。如果你的评估结果总响应数（Total Responses）少于这个数字（例如 9129），说明部分样本被错误丢弃或未正确处理。\n\n请检查以下几点：\n1. 确认使用的评估脚本是否为最新的 `scripts\u002Feval\u002Feval_audio_video_AVQA.sh`。\n2. 检查数据加载逻辑，确保没有因为文件路径错误、格式不支持或过滤条件过严而丢失样本。\n3. 
对比官方论文中的数据预处理步骤，确保视频帧采样率和音频处理参数一致。\n只有当测试样本总数达到 9185 且处理逻辑无误时，才能复现论文中的高精度结果。","https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues\u002F135",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},32529,"是否可以将 LoRA 适配器权重合并到基座模型中进行部署？","虽然理论上可以合并权重，但官方推荐直接使用最新代码库提供的“直接加载 LoRA”功能，这样更简便且不易出错。如果你确实需要合并权重（例如为了在某些不支持动态加载 LoRA 的推理引擎中部署），可以参考 HuggingFace PEFT 库的标准合并方法：\n\n```python\nfrom peft import PeftModel\nfrom transformers import AutoModelForCausalLM\n\nbase_model = AutoModelForCausalLM.from_pretrained(\"DAMO-NLP-SG\u002FVideoLLaMA2-7B\")\nlora_model = PeftModel.from_pretrained(base_model, \"path\u002Fto\u002Flora\u002Fweights\")\nmerged_model = lora_model.merge_and_unload()\nmerged_model.save_pretrained(\"path\u002Fto\u002Fsave\u002Fmerged\u002Fmodel\")\n```\n但在 VideoLLaMA2 项目中，优先建议使用 `model_init('path\u002Fto\u002Flora\u002Fdir')` 的方式直接加载，详见 Issue #32 和 #83 的讨论。","https:\u002F\u002Fgithub.com\u002FDAMO-NLP-SG\u002FVideoLLaMA2\u002Fissues\u002F59",{"id":134,"question_zh":135,"answer_zh":136,"source_url":127},32530,"在进行多卡评估（Evaluation）时，如何正确配置环境变量和脚本？","进行多卡评估时，需要使用 `CUDA_VISIBLE_DEVICES` 指定可用的 GPU 编号，并运行官方提供的评估脚本。例如，使用 8 张显卡评估 AVQA 任务的命令如下：\n\n```bash\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts\u002Feval\u002Feval_audio_video_AVQA.sh\n```\n\n请确保：\n1. 所有指定的 GPU 显存充足。\n2. 脚本内部的模型路径和数据集路径已根据你的实际环境修改。\n3. 如果是自定义数据集评估，需调整脚本中的 `--data-path` 和 `--result-path` 参数。",[]]